Unscripted Coding is entirely unscripted and organic. If you're looking for a tutorial, you're in the wrong place! What we do here is try and build something from scratch, filled with all the mistakes and lessons learned along the way!
This is interesting. I feel like about a year ago models would just stop, and you could ask them to continue. But I think that was viewed as a "bug" and not a feature. Now I feel like they (the models) have been trained to make a 'complete' response, but as you pointed out, it may not be as complete as you want it to be. Thanks for the great tip! I will be trying that out!
They got rid of that pretty fast. You get to control token output length for most models, so implicitly you're telling the model how long of a response you want. Most use cases today don't require something super long; pair that with the fact that the models aren't designed to output very lengthy replies, and it's a sensible decision imo.
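For context, most chat APIs let you cap output length explicitly. A rough sketch with the OpenAI Python client (the model name and token limit here are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Cap the reply length explicitly; the model stops once it hits the limit.
response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    max_tokens=300,        # hard ceiling on the length of the response
    messages=[{"role": "user", "content": "Summarize this thread in one paragraph."}],
)

print(response.choices[0].message.content)
```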
I personally love it for reading manga and PDFs at night; it's been fantastic and easy on the eyes. With the Eyemoo, there is a sweet spot, so if you stay there I'd say it's very similar to e-ink; if you're out of it, you might feel a bit of strain because the light isn't even across the screen. That's a big caveat, but it's just the poor design of a first-generation device; the technology itself seems great to me.
The field is called photobiomodulation. Research was originally done with lasers but has been shifting to LEDs in recent years. There is a comprehensive Google Docs spreadsheet tracking the ~8,000 papers that have been written on the topic; google "photobiomodulation spreadsheet" to find it. It's an emerging field; there were probably about 6,000 papers when you made this video. IMHO, the finest intro is a science review paper, "Melatonin and the Optics of the Human Body" (2019; www.melatonin-research.net/index.php/MR/article/view/19 -- you can download a PDF of the paper from that webpage). The paper outlines both a positive health impact of red/NIR light on our mitochondria and how we have managed to remove essentially all infrared light from our homes.

Visible-spectrum indoor LED lighting was instituted with no examination of potential health impacts on our biology; removing all infrared radiation from our lighting was a terrible idea. It wasn't flat-earth thinking, but it was narrow-minded, narrow-spectrum thinking. Fortunately, it's easy to sprinkle in a few custom screw-in bulbs that provide red/NIR lighting. It's certainly possible for manufacturers to add an LED chip or two in that spectrum to visible-only LED bulbs to provide that fuller spectrum cheaply to the masses. One astonishing thing from the Zimmerman paper is how far NIR light penetrates the body: at least 6 to 8 cm. We were dumb lucky that incandescents provided so much infrared radiation. Today, we can have indoor lighting that is cheap, energy-efficient, and fuller-spectrum. We just need a little bit of [invisible] enlightenment.

You selected a premium brand of RLT panels to compare your DIY project to. That panel provides all sorts of functionality: flexible light output, Bluetooth control, an iOS/Android app, third-party testing, UL certification, customer service, and a warranty. And there is a middle ground: you can buy appliances from Alibaba today -- many for well under $100 and some for under $50. I'd love to see a portable device from US manufacturers that uses USB-C for power for somewhere around $50; I think that's quite doable. It would be way cool for little panels to be available from the local Dollar Store.

My main anger/frustration is that it never occurred to our government to check out the health impact of removing infrared radiation from our indoor lighting. That could have been studied 20 years ago. Infrared LEDs have been around forever: they were the first LEDs and were ubiquitous in TV remote controls for decades. Further, there has been no action since that Zimmerman paper was written 5 years ago. This is a paradigm shift, but it's really not difficult science.

I recently saw Neil deGrasse Tyson's dismissal of incandescent bulbs on his StarTalk channel. Dissing those hot and inefficient bulbs is good. However, NDT failed to realize that infrared radiation is essential to our health and that occupants of a home lit with visible-only LEDs are probably starving for infrared radiation. He clearly didn't research photobiomodulation -- anything about red/NIR science -- before cutting that video. He usually does great science on his vids, but he whiffed on that one.

Thanks for your video. I explicitly searched for DIY infrared projects, and I was satisfied when I found your project. I wish MAKE had a writeup on NIR LED lighting -- maybe with a USB power supply. We definitely need more light and less heat on this topic.
I guess that's not the perfect phrase to use, but I think you understand.
Yeah, I think the benchmarks show a jump, but in real usage it's been incredible. In the last few videos I recorded (upcoming), I basically asked Claude to start and sat there stunned because it just built everything in one go.
I'm using a Pro plan, and I use the API, depending on what I need it for. I stopped ChatGPT Pro; I'm trying to stick with just one paid plan, and for now, Claude is the way to go.
For coding, hands down Claude 3.5. The part I really miss from ChatGPT is internet search. I think the ChatGPT plugins like image generation are also done better, and the whole voice bot might bring me back to ChatGPT in the future. But right now, coding with Claude is pretty unreal.
The new GitHub setup has different fields and I cannot find the best region selection. East US is being denied due to heavy load, and when I choose other locations I am still not able to create Cosmos DB. Can you give some info or a video on the latest GitHub workflow? Thank you.
Thanks for the info, great tutorial! In case somebody has issues with the workflow build: mine kept failing when I copied AZURE_CREDENTIALS and AZURE_APP_SERVICE_NAME from the docs. When I typed the names into the repository secrets manually, for some reason it worked fine.
Nice, I will check this out. RAG is so unreliable; I wish there was also a memory function for the agent. Have you tried AnythingLLM, NotebookLM, Verba, or Ragflow? The one that worked well for legal documents, based on one use case, is Perplexity. For Cohere, I wonder whether it is the embedding method, the chunking, etc. that makes it effective, and what the PDF ingest quantity limitations are.
What a crappy video! You try to explain something in this video - but you don't know where to upload files, for example ... lol ... well prepared to record an explainer video
Good question. In my personal use, Cohere is significantly better at retrieving information from documents than GPT and Gemini. It lacks some of the creative flair and answers a bit more robotically, but for fetching information out of a PDF and doing something with it, it's hard to beat Cohere in my opinion.
You're not alone; Perplexity's API is terrible. Not only is their documentation bad and the descriptions of each model nonexistent, but the output is nothing like the Perplexity "Pro" search. It's almost as if the API only searches the first result and summarizes it in less than 50 words, whereas Perplexity Pro searches 10-20 results and is closer to 1,000 words. Pretty lame overall; it would be very powerful if we had a real API from Perplexity. I see some unofficial wrappers and scrapers I'm tempted to build out, but that's just lost revenue for Perplexity. Amateur hour!
Thanks! It's not the greatest video in my collection, but sometimes I think it's helpful for people to see disappointment in real time lol. No idea why they're still dancing around a properly functioning API; they seem to have good funding too. Check out Cohere's Command R: they have built-in web search in their API, though it doesn't work as well as Perplexity's interface.
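If it helps, here's a rough sketch from memory of Cohere's v1 chat endpoint with their hosted web-search connector; the connector id and parameter names may differ by SDK version, so check their docs:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Ask Command R to ground its answer with Cohere's hosted web-search connector.
response = co.chat(
    model="command-r",
    message="What changed in Perplexity's API this year?",  # example query
    connectors=[{"id": "web-search"}],  # built-in web search
)

print(response.text)

# Grounded responses also expose the sources that were retrieved.
for doc in response.documents or []:
    print(doc.get("url", ""))
```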
Thank you! Just as I was looking for a solution to that problem. Do you know if there is an AI builder that lets you summarize page by page but has a UI, so non-coders can easily use it? I want to chunk a long PDF into pages and let it summarize each page to get a compaction factor of maybe 4 pages into 1.

I just saw your code. It works for relatively short articles, but for long content like a PDF file or a book with hundreds of chunks, your code will lead to a token explosion and run up quite the API costs. Not recommended. Instead, it is better to send overlapping content, so chunk 2 contains the last 20% of chunk 1, in order to give the AI more context. This way your overall cost is only about 20% higher than having no overlap, rather than growing exponentially like with your method.
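For what it's worth, the overlap idea is simple to sketch in Python; the function name, chunk size, and percentage below are just illustrative:

```python
def chunk_with_overlap(text: str, chunk_size: int = 4000, overlap_ratio: float = 0.2) -> list[str]:
    """Split text into chunks where each chunk repeats the tail of the previous one."""
    overlap = int(chunk_size * overlap_ratio)  # e.g. 20% of each chunk carried forward
    step = chunk_size - overlap                # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# Each chunk is then summarized on its own, so cost grows roughly linearly
# (+~20% for the overlap) instead of resending everything already summarized.
pages = chunk_with_overlap("some very long document text...", chunk_size=1000)
```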
We need to create a prompt generator. I have already built one with OpenAI, but it does not print the results very well through the API. So I think if this works, it would be really useful.
11/10. Absolutely superior tutorial.
No flashy YouTube clickbait or effects - Check
Very detailed explanation from start to production (that even a complete beginner would be able to follow) - Check
Shows examples of errors and how to solve them - Check
Hey guys, I bought an eyemoo S1 4 days ago and they haven't shipped it yet, and they don't respond to my emails either. Do you know anything more? Is it a scam? What do you recommend I do?
Hey, I wouldn't be too worried about it at 4 days. I get the feeling the Eyemoo team is pretty small and frankly a little unprofessional. They stink at replying to emails for example, but I don't think they're a scam.
If you can find someone who speaks Mandarin, they respond quickly on their Mandarin Facebook page. I bought one through their Chinese website in Nov 2023 and communication was quick, but email communication in English doesn't seem to go through.
That's great to know! I'm really annoyed by their stylus bug, it's the one thing holding it back for me personally, so I just sent them a quick message. Thanks!
Have you worked with Assistants in Azure OpenAI? And if so, can you make a video like this showing how you set one up and deploy it? Thanks again, your videos are very inspirational.
Hey, I've been watching your videos. Have you tried Live Share? You may want to open the Live Share workspace, share the terminals and ports, and connect via the link. This way you can have a PC with the VS Code project + Docker (for example) and manipulate files too. Hope I could help.
For the knowledge base requirement, I think Open WebUI is better than AnythingLLM at parsing the document I gave it. A demo video for your reference: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-iEznNgEalSQ.html
For AnythingLLM, I have the same intended purpose as you. I think AnythingLLM can build a personal knowledge base. Unfortunately, I encountered the same problem as you: AnythingLLM can't help me find the correct description in the document.
Great video 👍🏻. I am curious about one thing though: let's say a PDF doesn't have fillable fields, is there an option to tackle that situation? Also, what if the user had to sign? Would appreciate your reply.
Hey there, both of these get tricky. For non-fillable fields, you'll use the same libraries but basically overlay text on top of coordinates; not bad, but you'll have to know your PDF very well. For signatures, you'll likely want to use something like DocuSign's Python SDK. I haven't ever used it before, but I wouldn't recommend using a library that isn't from a big eSignature provider.
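A minimal sketch of the overlay idea, assuming reportlab and pypdf; the coordinates and filenames are made up, so you'd measure them against your own PDF:

```python
from io import BytesIO

from pypdf import PdfReader, PdfWriter
from reportlab.pdfgen import canvas

# Draw the text we want onto a blank, in-memory page at known coordinates.
buffer = BytesIO()
overlay = canvas.Canvas(buffer, pagesize=(612, 792))  # US Letter, in points
overlay.drawString(100, 650, "Jane Doe")              # x/y measured from the bottom-left
overlay.drawString(100, 620, "2024-05-01")
overlay.save()
buffer.seek(0)

# Merge that overlay onto the first page of the original (non-fillable) PDF.
original = PdfReader("form.pdf")          # placeholder filename
stamp = PdfReader(buffer).pages[0]
writer = PdfWriter()

for i, page in enumerate(original.pages):
    if i == 0:
        page.merge_page(stamp)
    writer.add_page(page)

with open("form_filled.pdf", "wb") as f:
    writer.write(f)
```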
@@UnscriptedCoding Thanks for the reply. I was making a personal project that would allow users to sign stuff and edit like you showed. Are you saying to use a "big eSignature provider" for legal purposes?
@@khizershareef5076 Ah, yes I am. If you want something legally binding that's solid, I'd rather go with DocuSign or Adobe Sign. If it's just a hobby project, then it doesn't matter!
This is a cool project. I would like to know how to keep up with the original Microsoft chat functionality when the original open-source project adds images and other more advanced RAG features, without us having to make too many modifications.
Can you point me to a tutorial on how to do this myself? I own an iPad Pro with a LiDAR sensor and made some scans, and now I want to load them into my Meta Quest 2.