I really supported your channel before, but this makes me question it again... please stop using AI images as your thumbnails. Using that technology means you agree with and support how it was created, even though we all know it was built by scraping millions of people's artwork without permission, compensation, or even credit. Let's stop making thievery seem like a normal thing.
I seem to recall they were considering a PC release at some point, but it appears nothing came of it. Then again, you don't hear much about Dreams anymore either. I'm sure some still use it, but it seemed to exist in a large but fairly short-lived bubble.
@@Elwaves2925 Yeah, had it not been PS exclusive it would've been a notable product for sure. Not long ago I saw a level design talk made in Dreams, and it was quite interesting: one second you were looking at the key points of a topic just as if it had been done in PowerPoint, and the next you're immersed in an in-engine example, all seamlessly. It looked like prototyping was really fast with it.
@@Elwaves2925 I think the devs stated at one point that they completely overshot the mark with Dreams in comparison to LittleBigPlanet. While Dreams allows you to do basically anything, it has too much freedom and ends up being way too complicated. Like OP said, it's like a whole game engine. LBP was more limited, but was also FAR simpler. I think we are going to see a real LBP4 in the future.
I can say as someone who sculpted religiously in Dreams VR when I had my PS4: it feels fantastic and is very intuitive to use. I honestly think it's the best implementation of 3D modelling I've ever seen.
I agree. While not really great for ultra high detail, it was fantastic for getting shapes in quickly and stood out as a sculpting tool. I think part of the reason Dreams failed is that they used the same modeling controls for everything, including all the "logic" or code. That, and the limitations of the "dreams" meant you couldn't do anything too crazy in the engine without breaking it up into many levels. If they had a PC port I wouldn't be surprised if Dreams was still relevant today, especially if you could export/import models. It's very similar to the Epic "marketplace" but was all free. Having something powerful with completely free/sharable models/code/everything in game design would be massive for indie devs.
I see a few comments here claiming that «Metaballs» and «SDF» are «the same thing». However, in the original «Metaballs» paper by Blinn, called «A Generalization of Algebraic Surface Drawing», he used an algebraic method to calculate intersections between a ray and a shape directly. SDFs use a geometric method, an iterative approach, which is a bit less optimized but much easier to work with when constructing more complex shapes. The foundational SDF rendering paper was «Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces», written by Hart. Both papers are amazing; I can highly recommend them.
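For anyone curious what Hart's "geometric method" actually looks like, here's a minimal toy sketch in Python (my own illustration, not code from either paper): you march a ray forward by whatever distance the SDF reports, which is always safe because the SDF guarantees nothing is closer than that.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    # Hart's sphere tracing: step along the ray by the SDF value.
    # The step can never overshoot the surface, so no root-finding needed.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t  # hit: we're within eps of the surface
        t += d
        if t > max_dist:
            break
    return None  # miss

# Ray from the origin down +z toward a sphere at z=5 with radius 1:
hit = sphere_trace((0, 0, 0), (0, 0, 1), sphere_sdf)
print(hit)  # → 4.0 (first surface point of the sphere)
```

Blinn's algebraic approach instead solves for the ray-surface intersection directly, which is faster per shape but gets unwieldy once you start composing many primitives.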
Dreams did use the SDF method for modeling, but I believe the models were saved as point clouds. That's why it wasn't easy to export or import models from other applications.
I think they were saved as SDFs, and that's sufficient to explain why importing/exporting was difficult. Other software almost exclusively relies on texture mapped polygon meshes. If I remember correctly, the SDFs were rendered as point clouds though. Usually they are rendered via ray marching, which doesn't involve point clouds.
@@cube2fox "i didn't say we were raycasting *analytic* sdfs. we combine analytic sdfs at load time into volume textures then absolutely we do raymarch those volume textures to produce the 'unsplatty' look as in the video. we do splat the loose stuff, but that doesn't apply here." That's from the guy who co-wrote the engine, on Twitter (X) 🙃
But it makes a bit of sense, because not many people learned how to use it properly due to its complexity. Most people used it to make some basic things and stopped there, so Dreams got flooded with bad games/projects, which hurt the player base. There was some gold to find, but you had to dig through a lot of junk to find it.
Same as their other games, though, like the LBP series. These things are great for creatives, but many gamers are NOT creative. They don't have the patience or willingness to stick it out. Often you just get cheap clones, and clones of cheap "push to advance" or "text" style games that have little variety and basically feel the same all throughout. I loved playing LBP and trying to learn how to use it, but it was a chore to find anything really good among all the chaff. I never got Dreams, but I imagine it suffered the same fate.
@@robinhorneman2245 Some people like myself never got Dreams because there was no path to make money off the things you create with it. Without that, it's basically just a complex toy.
I wouldn't call it the future of 3D modeling. It's already in use, and box modeling is still a thing. SDF creates too many polygons versus clean hard-surface topology. SDF is core to a lot of procedural modeling in Houdini, and to meshing simulations in general, but it will not replace other modeling methods for good, so I wouldn't call it the future.
Love your video man! I'm making a video game called Hard Chip (on Steam), where you build huge semiconductor chips. It uses SDFs with a raymarching algorithm and voxels to split the render into discrete cells (and with Monogame/C#!). Like you said, SDF is old, and various implementations/use cases exist. My two cents on the topic from my own experience: SDF has major drawbacks and major advantages. If you don't play into its strengths, it'll be a huge pain, so it's hard for people to use because you have to understand what it's good at first. If you use it for something it's inefficient at, you'll get poor results, and then it's easy to just discard it and forget about it. It's not a good all-round solution. But when you play into its strengths, it's a real game changer! For Hard Chip, it allows the game's circuits to scale like crazy without having to resort to a cube-like aesthetic.
@@Polygarden I might be wrong, but I believe the original «blobby objects» paper by Blinn used intersection algorithms rather than sphere tracing (later called SDF).
My eight-year-old son has been modeling 3D art projects with Womp for a few years now. I strongly recommend it for anyone introducing a child to 3D modeling. It's very easy and intuitive.
Not really a foundation, but one of the methods of simplified representation of geometries to optimize speed. They combine many other tricks (voxels, light cards, screen space) and the code is constantly evolving. They especially need all these "hacks" because of the virtualized micropolygon geometry (Nanite) that makes tracing actual geometry very difficult.
@@kazioo2 It's basically the basis for the software ray tracing method. The light cards are used to update the surface cache for light sampling at the trace hit position. I believe voxels are not used anymore; that was sort of a fallback method in an older implementation.
I think Lumen (only) uses SDFs for a simplified scene representation in the software RT version of Lumen. The version that uses hardware RT doesn't use them, as far as I am aware.
Super interesting video. Thank you for that. I'm getting more and more annoyed with modeling in Max and Blender. Clavicula will be tried out right away.
It's not the future simply because creating meshes in Blender will always be faster for someone who is good at it. It's cool tech for those who can't though.
I think its future lies in making 3D modeling possible on devices that are otherwise too cumbersome for more traditional methods of 3D modeling. Dreams shows that SDF is a great solution for anyone that, for whatever reason, may want to model with a game controller. Perhaps in the future, SDF comes to Blender and as a result Blender can be comfortably used on a Steam Deck or other handheld gaming PC, for instance? I played an extremely niche PS2 game once called "Graffiti Kingdom," which allowed you to model your own player characters with the PS2's joysticks, and it probably would have been far less cumbersome if the game had Dreams-like SDF modeling. I imagine you're right, though, that for desktops and laptops, it's tough, if not impossible, to beat the speed of box modeling in Blender once you practice the hotkeys and other shortcuts until they become muscle memory.
I remember you covering Neobarok once and you said it was weird then! Nice to see your pronunciation of "weird" has changed almost exponentially with its weirdness. 🤓
They have. But they’ve only recently come into focus. Not sure why the sudden interest, but there definitely has been an explosion of interest very recently.
@@airman122469 It's probably that computationally we can make modeling software that won't stutter all the time from a moderately complex SDF model anymore (depending on rendering technique like raymarching or voxelization and meshing), but it's a reeeealy bold claim that this technique is anything new imo
One thing I like about SDFs is that you don't need the world-space mesh conversion until the very end (if you want to use the model later in traditional 3D software). Instead, what you see on screen can be splats or an adaptive mesh or raymarching or whatever, essentially giving you infinite resolution for curved surfaces. Mudbun or Clayxels for Unity come to mind.
that's awesome! I was hoping for something like dreams to come to PC, didn't know this kinda technology was called SDF. Always liked the more organic kinda look you would get from dreams models compared to blender
Unity’s TextMeshPro uses 2D SDFs; I think Freya Holmer’s Shapes might as well. I’m not sure they have a big future for 3D models in games (without converting to polygons) because of the cost of raymarching, but I am surprised they haven’t been used for colliders.
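The 2D trick boils down to storing distance instead of pixel coverage, then thresholding at render time, so edges stay crisp at any magnification. A hypothetical Python sketch of the idea (my own toy, not Unity's or TMP's actual code, using a circle in place of a glyph):

```python
def circle_sdf(x, y, cx=0.0, cy=0.0, r=8.0):
    # 2D signed distance to a circle: negative inside, positive outside.
    return ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

def coverage(d, smoothing=0.75):
    # Threshold the distance with a small smooth band, like the
    # smoothstep a font shader applies around the glyph edge.
    t = (smoothing - d) / (2.0 * smoothing)
    return max(0.0, min(1.0, t))

print(coverage(circle_sdf(0, 0)))   # deep inside  → 1.0 (opaque)
print(coverage(circle_sdf(20, 0)))  # far outside  → 0.0 (transparent)
```

Because the stored values are distances, you can also derive outlines, glows, and drop shadows from the same texture just by shifting the threshold, which is a big part of why the technique caught on for text.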
You know many people's first ambitious attempt at an SDF model will likely be SDF-01 just because "Signed Distance Fields" and "Super Dimension Fortress" share the same abbreviation. Mine included.
For something that doesn't have to be rigged, animated and then used in a commercial game engine, SDF can have some purpose. However, in mainstream engines SDF is a pain in the butt to work with. Yes, you can maybe import it as a static mesh inside Unreal, turn it into Nanite and let it go. But to use it for character design and animation you will have to retopologize the damn thing, get everything to a proper triangle count, redo the UVs, and then do everything else you would with proper poly geometry. That said, SDF is in my opinion exactly the same kind of thing ZBrush introduced with its technology a long time ago. It was visually stunning, but getting it into a real application was almost a reverse-engineering exercise.
The thing that fascinates me about SDFs is that they're similar to neural radiance fields. When a neural network is trained to recognise faces, it's kind of building an SDF-like representation of an average human face, encoded in the neural weights, which is sort of what your own brain is doing too. It's all relevant to AI and brains, and it's so cool! 😃
This is like live boolean meshing, which is very fun to play with, and it can be good for making organic environment pieces like rock formations; but for actual props and complex characters, mesh modeling is still the way.
C4D motion graphics users use SDFs with the volume mesher system plenty. It's pretty powerful, especially alongside ZRemesher, which has been in C4D by default since Maxon bought ZBrush.
The basics of SDF is volumes, or if you like, voxels, that based on density determine where to draw the surface: the sign is negative inside and positive outside, and that's the "signed" in the name. That's why it seems organic, since you can use noises in the volumes, which are mostly VDBs (for optimization). I even think Sculptris from ZBrush and 3D Coat use it in the background, and Houdini has a lot of tools and has been using it for years. It's not just for organic modeling; you can use it for hard surface too with no problem. I think this video was made without good research; even Unreal, I believe, uses SDFs to create an approximation of the geometry for Lumen.
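That sign convention is easy to demo, and the "organic" blending people associate with SDF sculpting usually comes from taking a smooth minimum when combining shapes. A small Python sketch (my own toy; the blend is the polynomial smooth-min commonly attributed to Inigo Quilez):

```python
def sphere(p, c, r):
    # Negative inside, positive outside -- the "signed" in SDF.
    return sum((p[i] - c[i]) ** 2 for i in range(3)) ** 0.5 - r

def smooth_union(d1, d2, k=0.5):
    # Plain union is just min(d1, d2); the k-wide blend rounds the seam,
    # which is where the soft, clay-like look comes from.
    h = max(0.0, min(1.0, 0.5 + 0.5 * (d2 - d1) / k))
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

a = lambda p: sphere(p, (0.0, 0.0, 0.0), 1.0)
b = lambda p: sphere(p, (1.5, 0.0, 0.0), 1.0)

print(a((0, 0, 0)))   # → -1.0 : inside, negative
print(a((2, 0, 0)))   # →  1.0 : outside, positive
# Midway between the two spheres the blended distance dips below the
# plain min, so the surface bulges and the seam gets rounded:
print(smooth_union(a((0.75, 0, 0)), b((0.75, 0, 0))))  # → -0.375
```

With k = 0 you'd get a hard boolean union; increasing k widens the fillet, which is exactly the clay-merging feel tools like Dreams and Sculptris go for.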
Will you be covering the Humble Bundle by Q-UP Arts featuring virtual instruments and other audio assets? Some of it looks appealing but I'm not sure if the bundle is mainly .wav files or plugins for compatible DAWs only. Regardless, thank you for all the videos you make. It helps tremendously when searching for new tools and assets! :)
I have what he is talking about. It was once called Oculus Medium, and it is no longer supported by Meta. After Adobe took it over, they didn't do much with it for some time. Then they finally made an update and added it as part of the Substance collection, sort of. Substance Modeler is basically Oculus Medium at its core, but made a bit better now. I have not used it since Adobe took it over, so I don't know a lot about the new changes. From what some have said, if you still have a copy of Oculus Medium and try to run it today, it will not work. Likely all thanks to Adobe telling Meta to lock it up in an update so it can't be used anymore. >.>
ConjureSDF: free updates for a year when entering the alpha version. But then it shows the prices for the next versions right next to it. If it takes over a year to ship an update, then you are not actually getting any more than the alpha version for 40 bucks. If all three updates come out within that year, then you are getting the whole package for an 'early supporter' fee of 40 bucks. The "1 year of updates" is misleading unless they've given a strict roadmap for when they'll reach version 1.
It's OK, Minecraft is still "voxels". They're just scaled bigger than what people practically use the term for, which is why the colloquial term "boxels" was coined at some point, but they are voxels in every way.
I found out about SDFs a few years ago and wrote some code to generate shapes and scenes of geometric shapes. Many applications displaying fractals like the Mandelbulb use SDF rendering. This is all done in a fragment shader, which was rather limiting. So I ported the code over to the CPU to generate graphics, and as I suspected, it was super slow; SDFs require parallel processing for any kind of real-time display and interaction. I eventually abandoned SDFs, as using OpenGL in a fragment shader was not easy to code, and the results were nowhere near a conventional 3D modelling app like Blender. Things may have moved on since then, with faster GPUs and perhaps more code-friendly ways to build SDF applications. But in my experience, things like reflections, transparency and texture mapping are very difficult and more processing-intensive, if not impossible. I could be wrong, but SDFs do have a place in 3D modelling and rendering in certain circumstances, just not as generalised as conventional polygons or point clouds.
so it's basically like a human heart. first we put a heart shape base, then a girl comes and we'd thought it would make it bigger, but it suddenly changes to subtractive and we ended up with a hole in our heart..
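Jokes aside, subtractive SDFs really are that simple: carving shape B out of shape A is just `max(a, -b)`. A toy Python sketch (my own illustration, not from any particular package):

```python
def sphere(p, c, r):
    # Signed distance to a sphere: negative inside, positive outside.
    return sum((p[i] - c[i]) ** 2 for i in range(3)) ** 0.5 - r

def subtract(d_a, d_b):
    # Boolean subtraction A - B: stay inside A (d_a) AND outside B (-d_b).
    return max(d_a, -d_b)

heart = lambda p: sphere(p, (0.0, 0.0, 0.0), 2.0)  # the base shape
hole  = lambda p: sphere(p, (0.0, 0.0, 0.0), 1.0)  # what gets carved out

p = (0.0, 0.0, 0.5)  # a point that used to be inside...
print(heart(p))                     # → -1.5 : inside the heart
print(subtract(heart(p), hole(p)))  # →  0.5 : hollowed out, now outside
```

Union is `min(a, b)` and intersection is `max(a, b)`, so the whole CSG toolbox comes almost for free, which is a big reason SDF modelers feel so immediate compared to mesh booleans.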
SDFs have been a big part of the success of many big games. One with impressive use of them was APB. SDFs have been around a long time, and it's a crime they aren't used more than they are.
Houdini is at the same time the biggest and the simplest 3D software (due to its simple and consistent logic, compared to other "simple" but very inconsistent competitors). And if you really want to play with SDFs, it is your friend.
I don't think we'll see it in "a couple of years", only once Maya and the big companies have begun to move their workflows. Yes, I feel we're going to see it, but it will take a decade, at least, to train and teach everyone to move away from polygon modeling.
Every time new software comes out, everyone says the same thing: "It's the future." But the current present is quite far from being a good future for the industry, and nothing can compete with ZBrush in digital sculpture.
@@JeffreyThrash It does nothing that cannot already be done with current software. And as for all the promotion it does for Adobe... many artists are already leaving their software behind for open source and lesser-known alternatives, due to Adobe's policies and its theft from users.
@@Daniel.F-3dart I played around with Blender's forgotten Metaball technology a bit, and I suppose you're right, it's hardly new or even innovative. Still, I wish I had learned about this stuff sooner for quickly making character base meshes, plus using it combined with the auto-retopology tool QuadRemesher does introduce some faster workflows for me.
Sector modeling is the future, I am currently creating unique generators for different character types. Hopefully this will help me create AAA+ games all by myself instead of hiring an army of modelers.
I used SDFs in my master thesis to sculpt selections in volumetric data: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8AyW80ETPDc.htmlsi=D2Z4qPbHVdvOfUj1 It is a very cool way to make shapes!
@@trachinusdraco When did MagicaCSG get Linux builds? That's the only option I actually like. Everything else is either impossible to navigate, or in alpha, or owned by Adobe.
I think Volume Builder is Voxel based, but it combines the best of SDF and poly modeling as you can transfer any poly model mesh to it, as opposed to being limited to SDF shapes. SDFs will have better performance tho.
I said when I first tried Dreams that it could have been the best 3D game engine on PC, but PlayStation forced you to use a controller, so it never was. Prototyping art should be handled by the engine. I don't understand why real game engines don't get that.
I could see it being useful for some things, but I don't like hopping around to different 3D modelling apps if I can avoid it. So if I used it, it would need to be for something that would take ages in Blender but be super fast with SDFs (which seems niche, but I've never tried it, so idk). I may give it a whirl at some point. Might be good for making some trees or interesting alien flora. If it's as weird as you say, then I'm looking forward to the Clavicula video.
@@gamefromscratch If the stock image came from Adobe Stock or possibly Getty Images, then chances are it could be an AI image that has been uploaded to the stock image site like a normal image. This is why I'm actually buying reference books these days like I'm a 20th century artist (e.g. Peanuts creator Charles Schulz had a whole library of references he would use for daily comic strip ideas or drawing more complex stuff correctly). It's getting harder and harder to tell whether that image you downloaded off of Google Images, Pinterest or other sites is actually a photograph and not AI-generated. Obviously, references are only useful if they have 100% correct lighting, anatomy, perspective, etc., and all the AI images flooding the Internet and crowding out genuine drawings and photographs make this a lot harder to determine, now. Buying reference books, especially ones that seem to be made before 2015 according to the copyright pages, is the best solution I can think of for now.
It's not going to take over every part of modeling, just the stuff geometry nodes can't already do in Houdini and Blender. Can it help? Sure, but for the most part not many people need it. At the end of the day the software used is just a tool and nothing more. There are already addons, for example in Blender, that take care of things like boolean meshes, so this isn't ultimately necessary.
Is SDF not just a variant of Metaballs? I mean, from a surface level (no pun intended) there doesn't seem to be much difference in the workflow, rather a difference in the underlying math of the software. So if they are so close to one another, why are SDFs likely to be worth investing in when Metaballs just vanished from practical use?
AI is the future of 3D modelling. I have been using AI drawing software starting with Midjourney, and since last year I have been a regular user of Leonardo AI, since it is an easy-to-use browser tool with lots of AI models combined into one interface. I use it for professional work. How does the creation process go? Well, I sketch the illustration in my mind in Photoshop, like I have been doing since 2010. Then I upload the colored thumbnail sketch into Leonardo and use the summary of the illustration as my prompt. From the variations I pick the best ones, bring them back into Photoshop, and mash pieces of them onto the canvas over my sketch. It is fairly easy matte painting work; the difference is you don't use photos or renders of 3D models, but AI-generated image parts and pieces. Then I reupload the piece into Leonardo, and this repeats until I reach a quality that is satisfying enough in industry terms. No, my work does not look like generic AI creations, as the actual composer is not the AI but myself. Also you won't find any sh*tty parts like crazy eyes or spaghetti fingers, as I correct them throughout the creation process myself with a stylus on my Wacom Cintiq, as usual. Why did I share this rather personal and irrelevant working process of mine, you may wonder? Because the future of 3D is similar to the current 2D. There will be prompt-input software where you make simple objects (OBJs) or import them, then input your prompt, then develop over it. And finally, I am sure you will be able to do retopology, pick the triangle or quad count, select whether it will be in triangles or quads, and let AI define the eyes, arms, legs etc. and separate them nicely for rigging and animation. So it will be like a 3D modeling package where you won't bother with anything like anatomy or tessellation. It will be automatic, intelligent, and fast. Very fast.
I create 2D digital art twenty times quicker than it would take if I drew it myself with limited reference use. The same will happen for 3D artists. And this is the step before fully prompted CGI animations or game assets. Yes, there will be a time when you write prompts and a game is created for you automatically; you playtest it, and while doing so you keep reprompting and polishing it until you are satisfied.
All the game demos you see on the First Strike Games RU-vid channel are made using Dreams, including the primary IP, Mad Magic Carpet Land. Sadly, Dreams is now toast. The product had the potential to be a superb game engine, but it seems Media Molecule didn't really know what it was themselves. A game engine? A game? Too little memory, locked to PS4, and the voxel modeling system too limiting. It was great for rapid prototyping though, as everything from modeling, animation, logic and even music was under one roof, and the gadgets, or nodes, were very intuitive. All new dev will be done in Unreal.