@nikisaraa I see. Have you tried viewing the camera footage in a dark room? If it’s like a regular USB camera and you can’t see anything in the dark, then either the night vision camera is broken or it’s not a night vision camera at all. This is a hardware issue with the camera, not a software problem.
Hi. Thanks for the video. I want to do the same thing, but with different targets on the same link, using image tracking, with each image showing a different video. Is it possible? Can you explain how to do it? Please. Thanks a lot. I already subscribed to your channel!
Thank you for subscribing to my channel! You want to place multiple markers in a single AR web application and have each one play a different video, right? After some research, it appears to be possible. The method involves registering multiple A-Frame markers. I could also make a tutorial video, but I don't have the time to start on it right now. If you're in a hurry, searching for 'a-frame marker multiple' should bring up some helpful websites.
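As a rough sketch of what registering multiple markers can look like (the `.patt` files, video file names, and library URLs below are placeholders/examples, not something from the video):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Example library URLs; use whatever versions your project already loads -->
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body>
    <a-scene embedded arjs>
      <a-assets>
        <!-- One video asset per marker; file names are placeholders -->
        <video id="video1" src="video1.mp4" preload="auto" loop muted playsinline></video>
        <video id="video2" src="video2.mp4" preload="auto" loop muted playsinline></video>
      </a-assets>
      <!-- Each marker gets its own video texture -->
      <a-marker type="pattern" url="marker1.patt">
        <a-video src="#video1" width="1.6" height="0.9" rotation="-90 0 0"></a-video>
      </a-marker>
      <a-marker type="pattern" url="marker2.patt">
        <a-video src="#video2" width="1.6" height="0.9" rotation="-90 0 0"></a-video>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

The idea is simply that each `<a-marker>` wraps its own `<a-video>`; you may still need a small script to call `play()` on each video when its marker is found.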
@@noelrecords-programming Thanks for answering! Yes, that is what I want. I tried it and I think I got it working. The only problem is the stability of the video relative to the marker, but I think it can be solved. Anyway, your video helped me a lot.
I checked the "remix the starter example on Glitch" again, and indeed it has 25 lines. However, upon careful examination, it's clear that the content is the same. The only difference is the placement of line breaks.
My project shows “Unity has not started sending image data [Capture Device #1]”. This happened after I had been working on it for several days. Do you know why, and how to solve it? Thank you 😢
@@noelrecords-programming But I just followed your steps. It worked at the beginning, but when I looked into the project again a few days later, it could not show the preview, and the code was no longer valid 🥲
I get that everything was running smoothly at first, but then it stopped working after a few days. I’m actually developing AR using A-Frame, AR.js, and Glitch, and it’s functioning without any issues. It’s likely that some changes were made to the code, or to your PC or mobile device. I understand it can be a bit of a pain, but I’d recommend going through each step again while watching the video.

Keep in mind, if you’re following the steps in my video and using A-Frame, AR.js, and Glitch, Unity isn’t involved, so the error you’re experiencing shouldn’t occur.

If the error persists even after revisiting the video and checking the code, it would be helpful if you could share the Glitch project you’ve created, if possible. That might give me some insight into what’s happening.
Thank you for your comment. I haven’t come across this issue before. Have you tried it on a different phone? If the problem persists across multiple phones, it might be an issue with the program itself. However, if it’s only happening on your phone, then the issue might be with your device.
If the issue is exclusive to the Samsung browser, it would be best to consult with an expert. Alternatively, if it’s possible, you might want to consider using a different browser, such as Chrome.
When I run the `toolchain create` command, the last three lines show the following errors:
[ERROR ] No python recipe compiled!
[ERROR ] You must have compiled at least python3
[ERROR ] recipe to be able to create a project.
What could be causing this?
I need my app to read the value from a QR code placed in a market, and after that to place an object on top of it. Can you tell me if this is possible with AR.js? Great video, thank you!
You cannot create AR from QR codes that were not generated with the Marker Training tool. However, I believe the app you want to create can be realized with Marker Training.
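For reference, a marker generated with the AR.js Marker Training tool is used in A-Frame like this (the file name `my-marker.patt` is a placeholder). Note that AR.js only recognizes the trained pattern image; it does not decode the data stored in a QR code:

```html
<!-- my-marker.patt: a pattern file exported from the AR.js Marker Training tool -->
<a-marker type="pattern" url="my-marker.patt">
  <!-- The object to place on top of the marker -->
  <a-box position="0 0.5 0" material="color: red;"></a-box>
</a-marker>
```

If you also need to read the QR code's actual value, that would require a separate QR-decoding library in addition to AR.js.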
If I use venv on top of pyenv, am I right in understanding that this allows even more flexible, fine-grained version management?

As for the difference between venv and pyenv, or how to choose between them: both let you install different versions of Python, but pyenv’s main use case is more about the stable coexistence of multiple environments, for example when you know you will keep using a particular version long-term (unlike a temporary virtual environment, you don’t delete it, but switching or uninstalling is still very easy!). venv, on the other hand, can also be used as a test environment, but in practice it tends to be used for unstable, infrequently used environments where it doesn’t matter if the library configuration gets messed up later, or for environments you only need temporarily and don’t want to keep around. In other words, it’s used as a “throwaway environment (virtual environment)”, in the same sense as a “throwaway account”. Is that about right?

Incidentally, after that I tried to install pipenv, which I had been curious about since it’s supposed to be even more feature-rich. First I installed pyenv, then installed Python 3.10.6 with pyenv, and then, deliberately without specifying any version, tried to install pipenv with pip:

> pip install pipenv   (attempt to install pipenv)
No global/local python version has been set yet. Please set the global/local version by typing:
> python -m pip install --upgrade pip   (upgrade pip if it exists)
Python
> pip list   (check whether pip exists at all)
No global/local python version has been set yet. Please set the global/local version by typing:

So before installing pipenv, you first have to set the Python version that pyenv uses globally. Once I set it to the Python 3.10.6 I had just installed, pipenv installed without any problems. Which means pipenv also has to be installed separately, one by one, for each Python version installed with pyenv. A bit tedious, but that’s how it works.
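To recap the working sequence from the comment as a command transcript (the second version number below is hypothetical, just to illustrate the per-version point):

```shell
# Install a Python with pyenv, then select it globally
pyenv install 3.10.6
pyenv global 3.10.6

# pip now belongs to that interpreter, so pipenv installs into it
pip install pipenv

# Each pyenv-installed Python has its own site-packages, so pipenv
# must be installed once per version you plan to use it with
pyenv global 3.11.4   # hypothetical second version
pip install pipenv
```

This matches the behavior described above: without `pyenv global` (or `pyenv local`) set, pyenv's shims have no interpreter to dispatch `pip` to.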
Hi, thank you for watching the video! Are you talking about generative AI like Stable-Video-Diffusion? I have never made a video from a photo using generative AI, so I can’t make an explanatory video yet. However, those generative AIs seem interesting, so I would like to try them if I have a chance. When that time comes, I will make an explanatory video, so please subscribe to the channel and wait.
When using a smartphone camera for augmented reality (AR), camera shake is often an issue. Currently, there are a few strategies to mitigate this problem:

- Use a tripod: Consider using a tripod to stabilize your smartphone. A tripod provides a stable base and minimizes hand tremors during AR capture.
- Opt for phones with effective image stabilization: Some smartphones come equipped with robust image stabilization features. These can significantly reduce camera shake, especially during handheld shooting.
- Be mindful of lighting conditions: Adequate lighting is crucial for clear AR images. Avoid low-light situations, as longer exposure times can exacerbate camera shake.

Remember that while these strategies can help, achieving optimal stability in AR photography often requires a combination of techniques and practice.
@@noelrecords-programming But the issue is not with the cell phone; rather, the image that appears in AR vibrates, as if it cannot settle onto the surface and keeps trying to balance itself in one place. In your example it happens at 0:22 (what you do is still great), but I wanted to know if code can be added so that the AR image stays more stable and doesn't keep trying to rearrange itself all the time. I hope I explained it well. THANKS A LOT AGAIN
I see. It’s very easy to create AR using A-Frame and AR.js, but as far as I can tell from the documentation, there doesn’t seem to be a built-in option to suppress that vibration. There might be a technique to stabilize it, but I don’t know one. I’m sorry.
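One workaround I have seen discussed in the community (not an official AR.js feature) is to low-pass filter the tracked pose with a custom A-Frame component, so the content lags slightly behind the raw marker pose instead of jittering. The component name "stabilizer" and the smoothing factor below are my own choices for illustration:

```javascript
// Linear interpolation: move a fraction `t` of the way from a to b.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Register the component only when A-Frame is actually loaded (in the browser).
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('stabilizer', {
    schema: { factor: { type: 'number', default: 0.2 } },
    init: function () {
      // Start from wherever the entity currently is.
      this.smoothed = this.el.object3D.position.clone();
    },
    tick: function () {
      // Each frame, pull the smoothed position a little toward the raw
      // pose the tracker wrote, then write the smoothed value back.
      var raw = this.el.object3D.position;
      var f = this.data.factor;
      this.smoothed.x = lerp(this.smoothed.x, raw.x, f);
      this.smoothed.y = lerp(this.smoothed.y, raw.y, f);
      this.smoothed.z = lerp(this.smoothed.z, raw.z, f);
      raw.copy(this.smoothed);
    }
  });
}
```

You would attach it to the entity inside the marker, e.g. `<a-video src="#video1" stabilizer="factor: 0.2">`. A smaller factor gives smoother but laggier motion; this is only a sketch, and smoothing rotation as well would need the same treatment applied to the quaternion.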
I checked the official ChromeOS Flex page. The 13-inch MacBook Pro Mid 2012 is a certified model, so it should boot without problems, but the other MacBook Pro 2012 models are not on the certified list. So they may not boot properly (^_^;) If it doesn't boot properly even on the 13-inch MacBook Pro Mid 2012... I don't own that model, so I can't really say... Sorry m(_ _)m
In this video, I'm using UTM in a simulator. If you are using UTM SE on a real device like the M1 iPad Air, you might find it slow. UTM SE stands for UTM ‘Slow Edition’. So, you might find it faster if you use UTM instead of UTM SE. However, in that case, you will need to jailbreak your iPhone, or install UTM using a third-party app store like Altstore. Even if you are using UTM and find it slow, there might be nothing you can do about it.