Great video, thank you! But I want to ask: what do I do if I want to set the sampler type to sample a virtual texture when sampling a texture, or sample the virtual texture directly in HLSL?
Hello, thank you very much for sharing the tutorial. I have a small question: I put this on a box, and except for the top face, which displays normally, the other faces display incorrectly. May I know what's causing this?
Hey, your approach to this effect looks really clean! I've made a similar effect, also with animations for each layer, but mine looks a bit messier. I do have a problem I just can't solve, and maybe you have an idea: to make this effect work, we have to sample a texture with slightly different UVs each time. But in our project, our height maps aren't textures; they're big, heavy functions that generate a non-tiling height map. As far as I can see, I can handle this in two ways: 1. we don't use parallax (we currently use bump offset), or 2. we replace the texture sampler with our heavy height function and evaluate it X times (killing the performance). But seeing your clean parallax approach makes me wonder if you might have a magical third way to solve this? 😅 I would be so thankful!
This looks great! One thing it would benefit from is an explanation of how to include the surface normal in this calculation. It works with a plane lying flat on the ground, but won't work if the plane is angled (or more precisely, it 'will' work, just not as expected, with the repeats going further 'into' the surface). I haven't tested it, but my guess is the real angle you want is the delta between the camera vector and the surface normal?
@@saege1173 I haven't tried it, but I think it will just be a matter of incorporating the surface normal into the calculation to find that difference, rather than assuming it's (0, 0, 1).
I wonder if you could use this to add depth to a fire, like a fireball. I've yet to find a fire effect (excluding real-time) that doesn't break when looked at from above or below.
Hey, thanks a lot for the tutorial! I tried it here as well and hit something very weird: I had to convert the CameraVector * -1.0 from world space into tangent space. I really can't tell why your code works, since you're offsetting the UVs in world space. Could you give some insight into why it works? Is there any special setup in the material?
First off, I'd like to thank you for being one of the few who touch on the HLSL side of Unreal. That said, this example only works on a flat surface pointing up and aligned to Unreal's default axes (it breaks when the surface is rotated). Would it be possible to expand on this with a tutorial that works on arbitrary, non-flat surfaces being transformed in the world (or moving)? I'm having trouble figuring out how to get this working.
Take your Camera Vector, connect it to a Transform node set to Transform From World To Tangent, and connect that to your viewDir. That uses the face normals for the ray marching, so it's aligned to the surface of the object.
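For anyone doing this inside the Custom node itself rather than in the graph, here's a rough sketch of what that Transform node computes, assuming an orthonormal tangent basis (the names here are illustrative, not Unreal's exact internals):

```hlsl
// World -> tangent transform of a direction vector.
// For an orthonormal basis, the inverse of the tangent-to-world
// matrix is its transpose, so mul(matrix, vector) here applies
// that inverse to the row-vector convention mul(vector, matrix).
float3 WorldToTangent(float3 v, float3x3 tangentToWorld)
{
    return mul(tangentToWorld, v);
}

// In the material, this replaces feeding the raw camera vector in:
// float3 viewDirTS = WorldToTangent(-CameraVector, TangentToWorld);
```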
@@renderbucket I was not prepared for the rather extreme angle warping and even more pronounced foreshortening issue at low glancing angles. Seems one needs to be very conservative with the depth when using this in tangent mode.
That stuff is sick... I have no idea about HLSL, but the problem I always have is that you need Texture Objects to plug into these Custom nodes. Is it possible to use noise or any other texture sample parameter instead?
Parallax Occlusion Mapping is awesome; I have a video on that in Unreal here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-jrJP__JRjEY.html?feature=shared
I always find your videos lacking in theory. It would only take a minimal amount of drawing to show how this works, but you never bother, instead presupposing that anyone watching already understands the theory behind what you're implementing.
Thanks for the feedback, I'll definitely try to include a bit more theory/visuals. I've mainly kept my videos on the shorter side, since people are more hesitant to watch longer videos, but I'll try to include more theory and visuals to explain what is being done, especially for this topic in the future. For this video specifically, the real magic is all done by the loop: updating the UVs and drawing the texture. The main logic works like this: RayStep is our view direction. With each for-loop iteration we add a small distance along the view direction to the UVs, stepping our sampling point along the view direction and then sampling the texture again with the updated UVs. That creates the illusion of slices of this texture in depth.
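A minimal sketch of that loop as a Custom node body. The input names (Tex, TexSampler, UV, ViewDirTS, NumLayers, Depth) are assumptions for illustration, not the exact code from the video, and it assumes a tangent-space view direction:

```hlsl
// Fake-depth layers: step the UVs along the view direction each
// iteration and composite the samples front-to-back, so nearer
// slices occlude deeper ones ("under" compositing).
float3 color = 0;
float  alpha = 0;
float2 uv = UV;
float2 rayStep = -ViewDirTS.xy * (Depth / NumLayers); // per-step UV offset

for (int i = 0; i < NumLayers; i++)
{
    float4 s = Tex.SampleLevel(TexSampler, uv, 0);
    color += (1.0 - alpha) * s.rgb * s.a; // deeper slices only show
    alpha += (1.0 - alpha) * s.a;         // through remaining transparency
    uv += rayStep;                        // march along the view direction
}
return color;
```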
@@renderbucket I was able to visualize it in my head before your comment, and your description still lacks the core of it, which is really best illustrated with a 2D drawing of the 3D setup.

If I had to describe it, I would start with a unique 3D vector: a ray direction from the camera to the shaded pixel (à la ray tracing). Its 2D part is used for UV sampling along the XY plane of the cube (Unreal is Z-up). Then a scaled version of it is used to step inward from the camera toward the shaded pixel, each time sampling again using the XY components as UVs, until the sampled value reaches/passes a threshold.

The core point is that whenever the marched ray reaches the cube's plane where the corresponding texture sample is 'outside' the shape, it keeps going into the cube, re-sampling the texture at each step until the sampled value reaches the threshold around the shape's edge. For different ray directions aimed at different shaded pixels, the cut-off point at the texture's shape edge ends up at a different depth into the cube (depending on the ray's orientation to the cube's XY plane), as if along a stack of copies of that texture.

If you haven't explained this core point, you haven't really explained anything; you've just shown a 'magic trick' with obtuse shader code. Ideally, show it with a drawing of how that stepping and thresholding looks in 3D: draw the stack, show the scaled ray-direction stepping and its 'projection' onto UV space (XY), and how that's used in the thresholding. Also, in code I would have demoed writing the vector stepping first, and only then the threshold exit condition, as that ordering follows the order of intuition build-up.
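The stepping-then-thresholding described above can be sketched as shader code (all input names here are illustrative assumptions, not anyone's actual implementation):

```hlsl
// March a scaled ray direction into the volume; at each step, use the
// ray position's XY components as UVs, and stop once the sampled value
// crosses the threshold, i.e. the ray has entered the shape.
// Assumed inputs: Tex, TexSampler, UV, RayDirTS (tangent-space ray
// direction toward the surface), StepSize, Threshold, MaxSteps.
float3 p = float3(UV, 0);          // start at the shaded pixel
float3 delta = RayDirTS * StepSize; // scaled ray direction per step

for (int i = 0; i < MaxSteps; i++)
{
    float v = Tex.SampleLevel(TexSampler, p.xy, 0).r;
    if (v > Threshold)
    {
        // Hit: the depth reached (p.z) differs per ray direction,
        // which is what makes the edge read as a 3D stack.
        return p.z;
    }
    p += delta; // keep marching into the cube and re-sample
}
return 0.0;     // ray exited without crossing the threshold
```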