Nono Martínez Alonso
Creative Coding, AI & the Getting Simple podcast.

👋🏻 Hi, I'm Nono. Here's what you can find on my YouTube channel.

👨🏻‍🏫 Weekly Creative Coding & AI streams and videos.
🎙 Getting Simple, a podcast on simple living, lifestyle design, creativity, tech & culture.

🎥 Subscribe to get notified when I go live and upload new videos. › bit.ly/2YsRssM
💬 Join the Discord community and say hi! › nono.ma/discord
🗓️ RSVP and follow future events to your calendar › lu.ma/nono

-

🎙 Podcast › gettingsimple.com/podcast​
🗣 Ask Questions › gettingsimple.com/ask​
💬 Discord › discord.gg/DdsefVZ​
👨🏻‍🎨 Sketches › sketch.nono.ma​
✍🏻 Blog › nono.ma​
🐦 Twitter › twitter.com/nonoesp​
📸 Instagram › instagram.com/nonoesp
Andy Payne: Grasshopper 2
15:02
21 days ago
Find VOICE MEMOS on Mac
4:22
A year ago
WebSocket Client in TypeScript
6:56
A year ago
WebSocket Server in TypeScript
7:33
A year ago
DALL-E Copies My Hand Drawings
2:08
2 years ago
DALL-E 2, Explained
10:39
2 years ago
Live Sound Effects with Farrago
5:03
2 years ago
It's Nice to See You, In Person
11:24
2 years ago
Comments
@HideBuz
@HideBuz 3 days ago
Love your content! I was just learning about Yjs!
3 days ago
Hey! I'm so glad to hear you're enjoying these videos. Thanks so much for letting me know! =) Nono
@user-xb7ds7ne2m
@user-xb7ds7ne2m 4 days ago
What system are you using on your computer?
4 days ago
Hey! I was using whatever version of macOS was the latest in Oct 2021 =) Is that what you're referring to?
@user-xb7ds7ne2m
@user-xb7ds7ne2m 3 days ago
Yes, thank you. Great video btw 🎉.
A day ago
THANKS!
@nealsmith2589
@nealsmith2589 4 days ago
Hello Nono, I tried this, but once I type everything out and press Enter, nothing happens.
4 days ago
Hey, Neal! What command are you running or what/where are you typing?
@coolrexchannel
@coolrexchannel 5 days ago
Hello, I'm making a drawing app using React and I'm using this library for drawing. I'm trying to make it work with pressure, but my pen scrolls the page when I try to draw something. Do you know how pressure is usually handled in drawing apps that use JS? I thought it was done by just taking the event.pressure property, but I don't think that's how other people do it.
4 days ago
Hey! I haven't tried this yet, but I assume there's no pressure when using your mouse. You'll have to work with devices (like a tablet or pens like the Apple Pencil) which support pressure. Maybe those are exposed via the event.pressure API.
@kristomik938
@kristomik938 10 days ago
Thanks for the video.
10 days ago
Hey, @kristomik. I'm glad you found this useful!
@imranhaque4193
@imranhaque4193 20 days ago
How do you make the background white in an instant?
17 days ago
If you mean making the background of the camera's video white or transparent, you may be looking for Descript's Green Screen feature → www.descript.com/tools/green-screen
@IleniaQuintero
@IleniaQuintero 21 days ago
Hello, I was looking at your video channel. We may be helping a company that uses secure images to increase supply chain security and help cloud native development. Would you be willing to help try their software, make a video, and help show devs how to use their tools? This is not an offer, but just to start a conversation about your willingness to take on sponsorship. Please provide me with your email if you are interested. You'd have a chance to look at their technology and decide if it's the type of software that you'd be interested in covering in your channel.
18 days ago
Hey, Ilenia - you can message me at nono.ma/contact/form =)
@merion297
@merion297 28 days ago
Thanks, bro! It's a joke that there's no one-click function in the application… 🤦‍♂
25 days ago
Agreed! =)
@cuoiaubai2732
@cuoiaubai2732 29 days ago
Can you teach me how to handle the logic in the newly created command? For example, creating a new PHP file.
25 days ago
Hey! You can put any PHP logic inside the command. =)
@dorrakadri1474
@dorrakadri1474 A month ago
Do you have any idea how to set initial values in a doc? I have a file tree, and when the user clicks on a file the path changes and they connect to a new WebSocket provider, but I need to have initialized values.
29 days ago
Not sure if this is what you mean, but you can set initial data programmatically:

// Import Yjs
import * as Y from 'yjs';

// Create a new Yjs document
const ydoc = new Y.Doc();

// Create an array in the document
const yArray = ydoc.getArray('myArray');

// Add initial data to the array
yArray.push(['item1', 'item2', 'item3']);

// Optionally, log the array contents
console.log(yArray.toArray());
@dorrakadri1474
@dorrakadri1474 28 days ago
Thank you for the reply. The problem is that I'm using it in a useEffect and with y-socket, so the content gets duplicated. I'm using the Monaco editor with multiple tabs. Is there any way to get the doc already connected to the provider?
24 days ago
I haven't worked with this code for a while so I don't know exactly. Is this something you would be able to solve with a global application state? Here's a simple React app that uses rko to manage state → github.com/nonoesp/live/tree/main/0058/esbuild-react-ts-v3 I hope that helps! Nono
@GeorgeHzMusic
@GeorgeHzMusic A month ago
Is there a way to select all voice memos in the Mac app and drag them all to a separate folder on the Mac? I'm trying to preserve the names of the notes, and doing it one by one will take forever (for some reason Cmd+A is not available).
29 days ago
I wish! But this is not supported.
@chosen3494
@chosen3494 A month ago
GitHub repo?
A month ago
Here you go! github.com/nonoesp/creative-image-generation =)
@RapDaddio
@RapDaddio A month ago
Thank you, Nono. Very helpful.
A month ago
Thanks for letting me know. =)
@spaideman7850
@spaideman7850 A month ago
Why does the Mac require users to have a PhD to access their own Voice Memos files? Users/yourname/Library/Application Support/com.apple.voicememos/ doesn't work.
A month ago
It's hard, I know. I think they're hiding these away from the file system for security. But it may also be because they want you to use Voice Memos as a storage system for voice memos and not deal with the files, which for some of us is not a great pattern. There are other comments that show a solution for newer macOS versions.
@leonhkao
@leonhkao A month ago
Not there
A month ago
Hey! Read the other comments, this changed a bit in macOS Sonoma. =)
@Aziz-kw6ct
@Aziz-kw6ct A month ago
That was very helpful, thank you for this tutorial.
A month ago
Hey, Aziz! Glad to hear you found this video useful. Nono. =)
@Alexis.MHP0
@Alexis.MHP0 A month ago
Thank you, this is very helpful.
A month ago
Alexis! Thanks for letting me know. Nono. =)
@mattydfromsandiego2356
@mattydfromsandiego2356 2 months ago
Appreciate you sharing your passion and your insights with us. Love all the projects. Looking forward to what's coming next. Bravo!
A month ago
Hey, Matty! Big THANKS for your comment. It makes me happy to hear my passion projects resonated with you. =) Nono
@tecnom7133
@tecnom7133 2 months ago
Thanks
2 months ago
Hey, Tecnom! Glad you found this useful. =)
@florianschweingruber2963
@florianschweingruber2963 2 months ago
Dear Nono, First off, thanks a lot for your work, it's super helpful. Can you elaborate on what's going on when we are downloading the shards, loading the tokenizers and safetensors? To me it seemed like the gemma model I previously downloaded and cached was being downloaded again. Or is that just showing the progress of loading into memory? How can I make sure the local, cached model is being used? Thanks again and all the best, Flo
2 months ago
Hey, Florian! The models and shards get downloaded to Hugging Face's local cache on your machine (on macOS, for me, that's ~/.cache/huggingface) and they shouldn't be re-downloaded on every execution. It normally runs a quick check to verify the models are already there, and downloads them if not. As you say, there's always a loading bar for loading the model into memory. 👌🏻 Nono
@pb2222pb
@pb2222pb 2 months ago
Nice information. I just had one quick question: what is the configuration of your M3 Max? How many GPU cores and how much RAM?
2 months ago
Hey! I have an Apple M3 Max 14-inch MacBook Pro with 64 GB of Unified Memory (RAM) and 16 cores (12 performance and 4 efficiency). It's awesome that PyTorch now supports Apple Silicon's Metal Performance Shaders (MPS) backend for GPU acceleration, which makes local inference and training much, much faster. For instance, each denoising step of Stable Diffusion XL takes ~2s with the MPS backend and ~20s on the CPU. I hope this helps! Nono
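If you want to check MPS on your own machine, here's a minimal sketch (assuming PyTorch 2.x is installed):

import torch

# Use the Apple Silicon GPU (MPS backend) if available; otherwise fall back to the CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
print(f"Using device: {device}")

# Run a small matrix multiplication on the selected device.
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
print((x @ y).shape)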
@pb2222pb
@pb2222pb 2 months ago
@ Thanks a lot! I ordered mine yesterday after watching your video.
2 months ago
Great! Let us know how it goes. =)
@kunalnikam9112
@kunalnikam9112 2 months ago
In LoRA, W_updated = W_0 + BA, where B and A are decomposed low-rank matrices, so I wanted to ask: what do the parameters of B and A represent? Are they both parameters of the pre-trained model, or both parameters of the target dataset, or does one (B) represent the pre-trained model's parameters and the other (A) the target dataset's parameters? Please answer as soon as possible.
2 months ago
Hey! As can be read in this publication (arxiv.org/html/2403.16024v1#S2), dW(A) and dW(B) are the LoRA-trained weights for two possible tasks A and B. W(0) represents the pre-trained weights. So that W(A+B) = W(0) + dW(A) + dW(B), which are the pre-trained weights with the modifications of tasks A and B.
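To make the shapes concrete, here's a minimal PyTorch sketch of the basic LoRA update (the dimensions and rank below are made up for illustration): the pre-trained weight W0 stays frozen, and only the low-rank factors A and B are trained on the target data.

import torch

d, k, r = 512, 512, 8            # layer dimensions and a low rank r
W0 = torch.randn(d, k)            # frozen pre-trained weights
B = torch.zeros(d, r)             # trainable low-rank factor, initialized to zero
A = torch.randn(r, k) * 0.01      # trainable low-rank factor, small random init

# LoRA: the effective weight is the frozen base plus the low-rank update BA.
W_updated = W0 + B @ A            # (d x k) + (d x r)(r x k) = (d x k)

x = torch.randn(k)
y = W_updated @ x                 # forward pass with the adapted weights
print(y.shape)                    # torch.Size([512])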
@AamirShroff
@AamirShroff 2 months ago
I am trying to run with the CPU. I am getting this error:
Gemma's activation function should be approximate GeLU and not exact GeLU. Changing the activation function to `gelu_pytorch_tanh`. If you want to use the legacy `gelu`, edit the `model.config` to set `hidden_activation=gelu` instead of `hidden_act`. See github.com/huggingface/transformers/pull/29402 for more details.
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Killed
2 months ago
Hey, Aamir! What machine are you running on?
@AamirShroff
@AamirShroff 2 months ago
@ I am using an Intel Core Ultra 5 (14th gen).
2 months ago
I've only run Gemma on Apple Silicon so I can't guide you too much. Hmm.
@qwakubenyarko1473
@qwakubenyarko1473 2 months ago
Source code, please.
2 months ago
Hey!
1. Install Node.js & npm
2. Install nativefier globally: npm install -g nativefier
3. Create an app from a website: nativefier "website url here"
I hope that helps! Nono
@majdzet9938
@majdzet9938 2 months ago
When I type npm install, it gives me an error. Help!
2 months ago
Hey! What's the error?
@majdzet9938
@majdzet9938 2 months ago
@ When I type npm install, it says 'npm' is not recognized as an internal or external command, operable program or batch file.
2 months ago
That means you need to set up the Node Package Manager (NPM) on your machine before you can run any NPM commands in the terminal.
@nadiiaheckman4213
@nadiiaheckman4213 2 months ago
Awesome stuff! Thank you, Nono.
2 months ago
Thank you, Nadiia! Glad you found this useful. =)
@habladoenserio1668
@habladoenserio1668 2 months ago
Thanks, just what I needed.
2 months ago
You're welcome! Glad you found this useful. =)
@nanayawasare1
@nanayawasare1 2 months ago
Great demo
2 months ago
Glad you found it useful!
@nanayawasare1
@nanayawasare1 2 months ago
I want to use it to play loops for church services. I hope it can serve that purpose.
@wibuslayer183
@wibuslayer183 2 months ago
Thanks, sir, for the explanation.
2 months ago
Hi, Wibu! Thanks a lot for your comment. Glad this was helpful. Nono.
@AgusCraft2002
@AgusCraft2002 3 months ago
Argentinian?
3 months ago
Hi! Spanish. =)
@tay.0
@tay.0 3 months ago
While testing V8 over the last 3 years, I got to experience Andy's work firsthand, and he's been a great addition to the McNeel team.
3 months ago
Thanks for sharing, Tay!
@JoshCaiLovzu
@JoshCaiLovzu 3 months ago
My assignment asks us to do a flattening operation. But why are we even flattening an image???
3 months ago
Hey, Josh. I pinned a previous comment where I answer this question. Copy-pasting here in case you missed it. =)

WHY DO WE FLATTEN?

The quick answer is because we want to convert images or other multi-dimensional data into one-dimensional linear tensors or vectors that can be used as inputs to the next layer of a neural network (say, artificial neural networks, or ANNs). In convolutional neural networks (CNNs), 'we flatten the output of the convolutional layers to create a single long feature vector. And it is connected to the final classification model, which is called a fully-connected layer. In other words, we put all the pixel data in one line and make connections with the final layer.' [1]

As mentioned by @WahranRai in a comment, flattening happens row by row, starting from the top row and then going left to right through the cells of each row.

[ 1 ] [ 2 ] [ 3 ]
[ 4 ] [ 5 ] [ 6 ]  ›  [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
[ 7 ] [ 8 ] [ 9 ]

🔗 References
[1] towardsdatascience.com/the-most-intuitive-and-easiest-guide-for-convolutional-neural-network-3607be47480
[2] medium.com/@PK_KwanG/cnn-step-2-flattening-50ee0af42e3e
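And here's a minimal NumPy sketch of that row-by-row (row-major) flattening, just to illustrate the idea:

import numpy as np

# A 3x3 "image" with the values from the diagram above.
image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

# Flatten row by row (row-major order): top row first, left to right.
flat = image.flatten()
print(flat)  # [1 2 3 4 5 6 7 8 9]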
@JoshCaiLovzu
@JoshCaiLovzu 3 months ago
@NonoMartinezAlonso Hey, I'm hugely grateful. Thank you very much!!!! Do you have a course? I'll sign up.
3 months ago
I have the following playlists but no formal course yet! I really want to build one. =) - Live Streams ru-vid.com/group/PLVz6zdIOM02XcUh-wl0ECneCkvX2YP7tZ - Podcast ru-vid.com/group/PLVz6zdIOM02XW0b81XaCHiCow3UuBdbrg - Machine Intelligence ru-vid.com/group/PLVz6zdIOM02VGgYG_cwmkkPGqLJhUms1n Cheers, Nono
@insecureup
@insecureup 3 months ago
Do you have a channel in Spanish?
3 months ago
Hi! Not at the moment, but I'd love to make content in Spanish in the future. =)
3 months ago
💬 Chat with Andy Payne and Nono during the episode premiere. 🎟 RSVP now at lu.ma/gs-andy
@PierreMilcent
@PierreMilcent 3 months ago
Running Sonoma 14.3.1 and found that these files are now stored in /Users/xxxx/Library/Group Containers/group.com.apple.VoiceMemos.shared
3 months ago
Thanks a lot for the heads up, Pierre! That seems right.
@Morgan-Zolko
@Morgan-Zolko 2 months ago
doesn't work
@theyearof1984
@theyearof1984 A month ago
I don't see them in there either. I'm on 13.5.
@PierreMilcent
@PierreMilcent A month ago
@theyearof1984 I wrote "Sonoma 14.3.1", which is not your version.
@bao5806
@bao5806 3 months ago
Why is it that when I run 2B it's very slow on my MacBook Air M2, usually taking over 5 minutes to generate a response, but on Ollama it's very smooth? 🤨
3 months ago
Hey! It's likely because they're running the models with C++ (llama.cpp or gemma.cpp) instead of running them with Python. It's much faster, and I have yet to try gemma.cpp. Let us know if you experiment with this! Nono
@nhinged
@nhinged 3 months ago
@ Can you link gemma.cpp? I haven't searched Google yet, but a link would be nice.
2 months ago
github.com/google/gemma.cpp
3 months ago
↓ Here are the timestamps.
01:07 Introduction
01:21 Today
02:19 Fine-Tune with LoRA
04:09 Image Diffusion Slides
06:43 Fine-Tune with LoRA
13:31 Stable Diffusion & DALL-E
22:27 Fine-Tuning with LoRA
01:34:20 Outro
@ferhateryuksel4888
@ferhateryuksel4888 3 months ago
Hi, I'm trying to make an order-delivery chatbot. I made it with GPT by providing APIs, but I think it will cost too much. That's why I want to train a model. What do you suggest?
3 months ago
Hey! I would recommend you try all the open LLMs available at the moment and assess which one works better for you in terms of the cost of running it locally, speed of inference, and performance. Ollama is a great resource because in one app you can try many of them. Gemma is a great option, but you should also look at Llama 2, Mistral, Falcon, and other open models. I hope this helps! Nono
@ferhateryuksel4888
@ferhateryuksel4888 3 months ago
Thanks a lot @
@moman1151
@moman1151 3 months ago
This video is amazing. I can't find anything else like it. I only wish the music wasn't so loud. Sometimes it is hard to hear you... :(
3 months ago
Glad to hear it! About the music, yes, I forgot to turn it off until really late into the recording. =)
@user-sq7ue2dm2r
@user-sq7ue2dm2r 3 months ago
Would you be able to share the libraries from your project's requirements.txt?
3 months ago
Hey! This is what `pip list` outputs in my environment. But what really matters is that the environment is running Python 3.11.7 and that I installed the following libraries: accelerate, transformers, pytorch. I hope it helps! Nono

› pip list
Package Version
------------------ ----------
accelerate 0.27.2
certifi 2024.2.2
charset-normalizer 3.3.2
clip 1.0
filelock 3.13.1
fsspec 2024.2.0
ftfy 6.1.3
huggingface-hub 0.20.3
idna 3.6
Jinja2 3.1.3
MarkupSafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.4
packaging 23.2
pillow 10.2.0
pip 23.3.1
psutil 5.9.8
PyYAML 6.0.1
regex 2023.12.25
requests 2.31.0
safetensors 0.4.2
setuptools 68.2.2
sympy 1.12
tokenizers 0.15.2
torch 2.2.1
torchvision 0.17.1
tqdm 4.66.2
transformers 4.38.1
typing_extensions 4.9.0
urllib3 2.2.1
wcwidth 0.2.13
wheel 0.41.2
@user-sq7ue2dm2r
@user-sq7ue2dm2r 3 months ago
@ Thanks for this information. I created exactly the same environment, but for the GPU version, upon calling generate, it returns the same prompt as output (nothing more), whereas this works perfectly fine when I use the CPU code. Here's my GPU code:

from transformers import AutoTokenizer, AutoModelForCausalLM
import time

start = time.time()

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
print(f"Tokenizer = {type(tokenizer)}")

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
print(f"Model = {type(model)}")

input_text = "Tell me ten best places to eat in Pune, India"
input_ids = tokenizer(input_text, return_tensors="pt").to("mps")
print(input_ids)

outputs = model.generate(**input_ids, max_new_tokens=300)
print(outputs)
print(tokenizer.decode(outputs[0]))

end = time.time()
print(f"Total Time = {end - start} Sec")

3 months ago
What macOS version are you running on?
@user-sq7ue2dm2r
@user-sq7ue2dm2r 3 months ago
@ - It's Sonoma 14.4
@breadjesh
@breadjesh 3 months ago
Hello. I'm trying to do some work using YOLO. The predictions will be saved in a txt file in a folder on Colab. Is there any way I can make it automatically download the file using Python or something? Ideally, any time a txt file is added to the folder, I want it to be downloaded.
3 months ago
Hey! Not sure about doing that automatically as a trigger in Colab. But if you can add code right after the txt file is created, you can use the code shown in this video.
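In case it helps, here's a minimal sketch you could drop in right after inference (assuming you're in a Colab notebook; the predictions folder below is a hypothetical path, so point it at wherever YOLO writes its .txt files):

from pathlib import Path
from google.colab import files  # Colab helper that triggers a browser download

predictions_dir = Path("runs/detect/predict/labels")  # hypothetical YOLO output folder

# Download every .txt prediction file once inference has finished.
for txt_file in predictions_dir.glob("*.txt"):
    files.download(str(txt_file))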
3 months ago
↓ Here are the timestamps!
00:17 Introduction
02:30 Today
04:17 Draw Fast by tldraw
06:15 Fal AI
07:20 Hands-On Draw Fast
08:03 What is Draw Fast?
10:09 Clone Draw Fast
14:16 Fal AI
15:04 Sign Up
16:41 API Key
20:17 Pricing
21:55 DEMO
25:55 Credits
28:03 Models
30:57 DEMO
37:59 Challenge
41:27 Break
44:42 Tldraw React component
49:23 Draw Fast Code
01:05:50 Outro
@sanjay6667
@sanjay6667 3 months ago
How can I use the GPU from Docker?
3 months ago
Hey, Sanjay! Certain systems support GPU usage inside Docker, and there are images that are already prepared for that. But it depends on your system requirements, and it's not a simple answer.
@user-zi3qg9zq8p
@user-zi3qg9zq8p 2 months ago
Maybe it helps someone: in the most primitive way, you can give access to the video driver from outside by adding it in Docker Compose. This approach is described in the Docker docs; just search for "Turn on GPU access with Docker Compose". I ran a TF GPU image this way without any mumbo-jumbo commands and installations.
@mmenendezg
@mmenendezg 3 months ago
Great video, great work with the visuals and really useful tutorial. 5 stars ⭐
3 months ago
Hey, Marlon! Thanks so much for your kind words. I'm happy to hear you found this useful. Nono
@vaishusekar408
@vaishusekar408 3 months ago
flat_arg_values = tree.flatten(kwargs)
AttributeError: module 'tree' has no attribute 'flatten'
I'm getting this error. How can I rectify it? Please tell me.
3 months ago
It seems the tree module doesn't have a flatten method. chat.openai.com/share/1a2f08c0-f1d1-4d1d-9d35-0c3a2f9a52ba
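One thing worth checking (this is an assumption, since I can't see your environment) is whether the `tree` you're importing is actually the dm-tree package or some other module shadowing it:

import tree  # should come from the dm-tree package (pip install dm-tree)

print(tree.__file__)                          # check which module is actually being imported
print(tree.flatten({"a": [1, 2], "b": 3}))    # dm-tree exposes flatten(); prints [1, 2, 3]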
@user-xv8ru8ny2v
@user-xv8ru8ny2v 4 months ago
Thanks for this! Wondering if Screen Share is available without the Connect subscription?
3 months ago
Hi, Yawar, it seems you only need to pair your reMarkable to the cloud, according to their help page at support.remarkable.com/s/article/Screen-Share. But it's not 100% clear, so you may need to try it first, if you already have a tablet, of course. If you were thinking of this as a deciding factor before buying, we'd need to dig a bit more. Cheers, Nono
@cloudby-priyank
@cloudby-priyank 4 months ago
Hi, I am getting this error:
Traceback (most recent call last):
File "C:\Users\Priyank Pawar\gemma\.env\Scripts\run_cpu.py", line 2, in <module>
from transformers import AutoTokenizer, AutoModelForCausalLM
ModuleNotFoundError: No module named 'transformers'
I installed transformers but am still getting the error.
4 months ago
Hey, Priyank! Did you try exiting the Python environment and activating it again after installing transformers?
@alkiviadispananakakis4697
@alkiviadispananakakis4697 4 months ago
I have followed everything. I get this error when trying to run on the GPU: RuntimeError: User specified an unsupported autocast device_type 'mps'. I have confirmed that MPS is available and have reinstalled everything.
4 months ago
Hey! If you've confirmed mps is available, you must be running on Apple Silicon, right? If you are, and you've set up the Python environment as explained, can you share what machine and configuration you're using? I've only tested this on an M3 Max MacBook Pro.
4 months ago
Other people have mentioned the GPU not being available in macOS versions prior to Sonoma. Are you on the latest update?
@alkiviadispananakakis4697
@alkiviadispananakakis4697 4 months ago
@ Hello! Thank you for your response. Yes, I have an M1 Max with Sonoma 14.3.1. I also tried all the models in case there was an issue with the number of parameters.
@drjonnyt
@drjonnyt 4 months ago
@alkiviadispananakakis4697 I had the same issue on an M2 Pro. I just fixed it by downgrading transformers to 4.38.1. Now my only problem is that it's unbelievably slow to run!
4 months ago
Nice! The only thing that may be faster is running gemma.cpp.
@tigery1016
@tigery1016 4 months ago
Thank you so much! I was able to run gemma-2b-it. Great model. Love how Google is releasing this as open source rather than closed source (unlike ClosedAI's ChatGPT).
4 months ago
Nice! Happy to hear you were able to run Gemma. =)
@tigery1016
@tigery1016 4 months ago
I'm still on Monterey, so the GPU doesn't work. Yeah, can't wait to update to Sonoma and use the full power of the M1 Pro. @
@filipepaganelli
@filipepaganelli 4 months ago
Thank you very much.
4 months ago
Hey, Filipe. Thanks for letting me know. Glad this helped! Nono