One issue that is missing from the tutorial is the fact that the ZoomDetection coroutine will NOT be canceled if only the FIRST finger is released (it is canceled only if the SECOND finger, or BOTH fingers, are released). To fix this, add a StopCoroutine() call to the "canceled" event of Touch #0's TouchContact.
Thanks for the tip! For anyone following the tutorial who wants to include this: first add a PrimaryTouchContact action to the Input Actions asset. It will be similar to SecondaryTouchAction but bound to Touch #0. Then call the ZoomEnd() method from its canceled event.
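Putting the two comments above together, the wiring might look something like this. This is a sketch based on the tutorial's setup; the generated class name (TouchControls) and the action names (PrimaryTouchContact, SecondaryTouchContact) are assumptions, so match them to your own Input Actions asset:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.InputSystem;

public class PinchZoom : MonoBehaviour
{
    private TouchControls controls;   // generated input actions class (name assumed)
    private Coroutine zoomCoroutine;

    private void Awake() => controls = new TouchControls();

    private void OnEnable()
    {
        controls.Enable();
        // Start zooming when the SECOND finger touches the screen.
        controls.Touch.SecondaryTouchContact.started += _ => zoomCoroutine = StartCoroutine(ZoomDetection());
        controls.Touch.SecondaryTouchContact.canceled += _ => ZoomEnd();
        // The missing piece: also stop zooming when the FIRST finger lifts.
        controls.Touch.PrimaryTouchContact.canceled += _ => ZoomEnd();
    }

    private void OnDisable() => controls.Disable();

    private void ZoomEnd()
    {
        if (zoomCoroutine != null)
        {
            StopCoroutine(zoomCoroutine);
            zoomCoroutine = null;
        }
    }

    private IEnumerator ZoomDetection()
    {
        while (true) { /* pinch logic from the tutorial goes here */ yield return null; }
    }
}
```

Nulling out zoomCoroutine after stopping it also makes ZoomEnd safe to call from either finger's canceled event without double-stopping.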
For best results, when declaring previousDistance in the enumerator, initialize it to the same Vector2.Distance used in the while(true) loop. Otherwise your zoom will always shrink on the first touch.
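For example, a minimal sketch of that fix, assuming the tutorial's PrimaryFingerPosition/SecondaryFingerPosition actions and a "controls" field for the generated input class:

```csharp
private IEnumerator ZoomDetection()
{
    // Seed previousDistance with the CURRENT finger distance instead of 0,
    // so the first loop iteration doesn't read as a big pinch-in.
    float previousDistance = Vector2.Distance(
        controls.Touch.PrimaryFingerPosition.ReadValue<Vector2>(),
        controls.Touch.SecondaryFingerPosition.ReadValue<Vector2>());

    while (true)
    {
        float distance = Vector2.Distance(
            controls.Touch.PrimaryFingerPosition.ReadValue<Vector2>(),
            controls.Touch.SecondaryFingerPosition.ReadValue<Vector2>());

        // ... zoom in if distance > previousDistance, zoom out if smaller ...

        previousDistance = distance;
        yield return null;
    }
}
```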
Can you make tutorials on data structures used for games, on Android in-app purchase monetization, and on how we can use URP in Android games? If you do, I'd be very, very thankful; if not, I still love your other stuff.
Two little ways to improve this system, aside from what I've seen in the other comments:

1. Multiply the speed of your zoom in/out by Mathf.Abs(distance - previousDistance) so that it zooms with the speed of your fingers, which feels more natural.
2. Adjust the direction the camera zooms depending on where on the screen your fingers are pinching. This is my code for it:

Vector3 direction = Camera.main.ScreenPointToRay((primaryFingerPosition + secondaryFingerPosition) / 2).direction;
Vector3 targetPosition = direction * -1 * speed * Mathf.Abs(distance - previousDistance);
cameraTransform.position += targetPosition;

With the way I do it, I don't account for Time.deltaTime, because whenever I do it tends to make everything really slow, so for now I'm just experimenting; but if you want to add it, it'd go after Mathf.Abs(distance - previousDistance) :)

Edit: Remove the "* -1" depending on whether you're zooming in or out :)
Thank you so much. I watched your other video on Swipe Detection, and for my game it wasn't giving the desired result, since the swipe was detected only after the touch ended. I combined the technique in this video with the swipe one to make a much faster, real-time swipe. Thanks so much for your videos.
Hello, thank you for your work. I've noticed that you create Actions, but the way you use them makes them not really actions but some inner data. I thought Actions should be things like "Jump", "Zoom", or "Slide", where anyone can subscribe to the action and get the same values no matter which controller performed it (if supported by the action). Instead, you create actions as value-change events and then write a script to handle those value changes.
One more thing: I think you'll want to initialize the previous touch distance before you start the zoom coroutine. Otherwise the camera will do a weird zoom as soon as the second finger touches the screen.
Hi! Great video, thanks! You show how we can use pinch to operate the camera, but there is another side: we can scale the object instead, correct? In that case we shouldn't move the camera, but should add the script to the object. I wish to make an app where I could upload images, then move and scale them. Would you like to make a tutorial around something like that? THANKS!
What's tricky is that if you have two fingers on screen, it fires the second finger; but if you lift the first finger and keep the second, it keeps firing secondFinger.Position. How can we solve this?
These tutorials are awesome @samyam. I've watched a heap now; you've got my subscription. I feel like I'm missing something here with the new Input System. It's awesome that I can bucket, say, a mouse press and a touch press into the same interaction to trigger an event (for example). But having done that, if I want to know where on the screen I touched (or clicked), I've got to work out which method I used and then query that device. That kind of seems counter to the point of bucketing them in the first place. Am I missing something?
I have a little question: how do I restrict the first and second touch? I created a container (type = Image), the class inherits the interface "IPointerDownHandler", and with the method "OnPointerDown()" I get the first touch position; but how do I get the second touch? Your method worked fine too, but I have a problem: I want to restrict the user so the first and second touch must happen within the container, not across the entire screen. 😞 Do you have any ideas? 🤔
You can use raycasts to cast from the finger-down position and determine whether it hits that object; I have a video on raycasts here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JID7YaHAtKA.html Or you could possibly cache the previous OnPointerDown, release it on OnPointerUp, and wait for the next OnPointerDown (2nd finger): answers.unity.com/questions/884262/catch-pointer-events-by-multiple-gameobjects.html
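The raycast idea can be sketched roughly like this; "touchPosition" here is a stand-in for whatever Vector2 your touch action reads, and the target object needs a Collider:

```csharp
// Cast a ray from the screen-space touch position into the scene.
Ray ray = Camera.main.ScreenPointToRay(touchPosition);
if (Physics.Raycast(ray, out RaycastHit hit))
{
    // hit.collider.gameObject is the object under the finger.
    Debug.Log("Touched " + hit.collider.name);
}
```

You can then compare the hit object against your container (or its children) to decide whether the touch counts.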
Hello, I have a question. I have a zoom in/out function that works through the mouse scroll, and everything is working perfectly. How can I add this pinch function to the input bindings that I created? Thanks!
So how would you allow a finger to press multiple buttons at the same time by sliding over them? What I'm trying to make is a mobile joystick for movement, but there is also a run button above it, and if you slide your finger high enough while also dragging on the move joystick it should engage the running. Please help!
Hey, I'm not understanding what the best way is to use swipe detection with the new Input System. My implementation works, but only if the swiping finger is the only one touching the screen. How do I make my swipe detection accept multiple touches? Preferably without using the Update function, since I've done well so far in this project keeping very little in Update functions.
I watched your videos over and over again, but I keep wondering if it's possible to use pan, zoom, touch, etc. in the same project, and I just don't understand how to do that. I have two scripts: one makes the camera zoom in and out (thank you again for your tutorial) and the other one rotates an object. They work separately but don't work together. When I switch them both on, only rotation works T-T
Yeah it’s possible! Feel free to join our Discord (link in description) and ask for help there (some details on your problem and how you are trying to solve it).
Hi, great tutorial! I have one question: how do I prevent the zoom behaviour when my fingers are touching / interacting with UI buttons? I don't want to zoom in/out when the fingers are interacting with UI elements. Thanks.
You can add a public bool that gets enabled whenever you don't want zoom. For example:

if (!zoomDisabled)
{
    zoomCoroutine = StartCoroutine(ZoomDetection());
}
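Another common approach, for the UI case specifically, is to ask the EventSystem whether the touch began over a UI element. This is a rough sketch, not from the tutorial; note that with touch you pass the touch's finger/pointer id, and behavior can differ between the old and new input systems, so test it on device:

```csharp
using UnityEngine.EventSystems;

// Returns true if the given touch/pointer id is currently over a UI element.
private bool TouchIsOverUI(int pointerId)
{
    return EventSystem.current != null
        && EventSystem.current.IsPointerOverGameObject(pointerId);
}

// In the handler that starts the zoom:
// if (!TouchIsOverUI(0) && !TouchIsOverUI(1))
//     zoomCoroutine = StartCoroutine(ZoomDetection());
```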
Hello again! Watched your video just for the orthographic camera, and I can't seem to figure out how to do it. I made a variable "private Camera mainCamera;", then in the Awake function did mainCamera = Camera.main; and in the if statement of ZoomDetection did this:

Vector3 targetPosition = mainCamera.transform.position;
targetPosition.z -= 1;
Camera.main.orthographicSize = Vector3.Slerp(mainCamera.transform.position, targetPosition, Time.deltaTime*cameraSpeed);

and I get this error on the Slerp part: Cannot implicitly convert type 'UnityEngine.Vector3' to 'float'. What did I do wrong?
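In case it helps anyone hitting the same error: orthographicSize is a float, so it needs Mathf.Lerp between two floats rather than Vector3.Slerp between positions. A sketch of the fix, reusing the mainCamera and cameraSpeed names from the comment above:

```csharp
// orthographicSize is a float: shrink it to zoom in, grow it to zoom out.
float targetSize = mainCamera.orthographicSize - 1f;
mainCamera.orthographicSize = Mathf.Lerp(
    mainCamera.orthographicSize, targetSize, Time.deltaTime * cameraSpeed);
```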
The secondary touch is called for me even though I only touch the screen with one finger, for some reason; but in another class it's detected as not being pressed.
Once the second finger is pressed down, take the average of the first and second finger positions; that midpoint is the center between the two fingers. Then you can move the camera in that direction and zoom in, as an example.
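A minimal sketch of that idea; primaryFingerPosition, secondaryFingerPosition, cameraTransform, and zoomSpeed are placeholder names, not from the tutorial:

```csharp
// Midpoint (center) between the two fingers, in screen space.
Vector2 midpoint = (primaryFingerPosition + secondaryFingerPosition) / 2f;

// Turn the screen point into a world-space direction and move the camera
// along it while zooming, so the zoom heads toward the pinch center.
Vector3 direction = Camera.main.ScreenPointToRay(midpoint).direction;
cameraTransform.position += direction * zoomSpeed * Time.deltaTime;
```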
Thanks for this great tutorial ! Is it possible to rotate the camera with the pinch (keeping the zoom/unzoom behaviour) ? And how can I do that easily ? The documentation about the new input system is not really complete 😅
It’s easy to get set up quickly, but I found that it very frequently deletes the stuff I set in the inspector. Ultimately I think the best way is generating the class and making your own input manager.
Yes, it should be mostly the same code. Just, for the zooming, instead of altering the camera directly, change the Cinemachine value: either the field of view, or the z position of the camera (depends on the type of Cinemachine camera you are using).
What would you recommend for testing/debugging multi-touch on Windows? The zoom coroutine never stops for some reason, and it is terrible to debug without being able to reproduce it inside Unity D:
Try using breakpoints and stepping through the code, you can connect your phone to the unity editor and have it use the breakpoints - it's called Remote Debugging ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ICh1ZEaVUjc.html
You say the touch is indexed from #0... but what is primary touch? Isn't that finger 1, and then #0 is the second finger? Also, could you link the documentation you're using, please? I'm not able to find it.
Primary touch is the first finger that touches the screen, aka Index #0. The second finger that touches the screen is Index #1. Docs: docs.unity3d.com/Packages/com.unity.inputsystem@1.0/api/UnityEngine.InputSystem.InputActionType.html#UnityEngine_InputSystem_InputActionType_PassThrough
Is there any extra setting you need to do in order to get the autocomplete functionality you are getting in VS Code? I installed it (along with the .NET SDK, the C# package, build tools, extra .NET packages..., unchecked "Omnisharp: Use Modern Net" in the VS Code settings, set it as the default editor in Unity, and restarted the computer and editor a couple of times), and it doesn't give me autocompletes for OnDisable and OnEnable. Maybe this is irrelevant, but it freaks me out whenever I'm following a coding tutorial and my autocomplete isn't working hahaha D:
I have a video on it! Set up Visual Studio Code with Unity and INTELLISENSE WORKING 2022 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ihVAKiJdd40.html
Do you mean prioritizing zooming? While it’s zooming you can set a zooming boolean and ignore other inputs, then unset the boolean after the zooming is finished.
The zoom doesn't work for me. I attach the created script to an empty GameObject and Build the project to my phone. But when I try to zoom, nothing happens. And I have no errors or anything shown.
Try attaching your phone to your computer and setting it up with the Unity debugger. I show how to use the Input Debugger for inspecting Input System values here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ICh1ZEaVUjc.html
@@samyam I have also installed the asset that shows the console on my phone, and after building to it I don't receive any errors. I did this tutorial once before and it worked; now I'm doing it a second time and it doesn't work...
Sorry I couldn’t be of more help! It’s hard to tell without seeing the project. If you’d like you can join our Discord and ask there with more details and your code in the #help channel discord.gg/B9bjMxj
I have an example video here Get Object from Mouse Click and Call Functions through Interface - Unity 2020 Tutorial ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JID7YaHAtKA.html
What Input System version are you using? You can also use the Input Debugger to see what values are being returned more easily ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ICh1ZEaVUjc.html
Once the pinch detection is complete you can destroy the game object there. If you want to get the object a finger is on top of you can use raycasts ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JID7YaHAtKA.html
@@samyam Hey Sam. I'm trying to pinch a specific GameObject to destroy it, but when I touch the screen with two fingers, all GameObjects are destroyed. How can I fix this?
I mean changing the resolution in mobile games. I tried to make the settings but couldn't. When I played Genshin Impact, I saw that its settings include a render scale option that reduces the game's render scale to improve performance.
Very adaptable to different types of controls, and easy to set up and customize! One central place to change your controls: no need to go digging through your scripts.
Try unchecking the additional components (install them later). This thread might help forum.unity.com/threads/editor-download-failed-incomplete-or-corrupted-download-file.519098/
@@samyam Yeah, after writing my comment I tried the same thing: I unchecked Android and iOS support and then it worked. Still, thank you for your reply, you are doing a great job❤️❤️👍👍👍
Thanks for this tutorial! I have been tinkering around with pinch detection using modifiers in the new input system and your video inspired me to make my own tutorial (in case you are interested: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LAkQzT_a0zQ.html). Keep up the awesome work!
@@samyam One suggestion for you, if you'd like: please make vlogs about yourself and the gaming industry, and discuss indie developers especially... if you can.
This is bad. Instead of just using the Update function, you generate several more branches with the IEnumerator. This is worse than just using Update, much worse in fact. It would've been better had you used a task on a separate thread, but you're not doing that. The branching looks like this:

MonoBehaviour > Update > check coroutines > enter coroutine on yield return > execute > exit coroutine > exit Update cycle > exit MonoBehaviour

as opposed to:

MonoBehaviour > enter Update > execute > exit Update > exit MonoBehaviour
Thanks for the input. I understand your point, but in this case we are only using one coroutine, not spawning several each frame. Also, since I did not declare an Update function in the script, Update will not execute. www.jacksondunstan.com/articles/2981