Hey guys! If you like the video, you might also be interested in my recent blog post about a related topic - dependencies and communication between modules: binaryigor.com/modular-monolith-dependencies-and-communication.html
Hey, If you like pure numbers more, I have published a summary of extended test results on my blog: binaryigor.com/how-many-http-requests-can-a-single-machine-handle.html#summing-it-up Cheers!
Hi, man... I'm just here to say "Thank you". It's not really about education for me; it's about knowledge and happiness... You provided that. Directly from Brazil, thank you again!
My pleasure! I am also learning and enjoying a lot: before, while and after making these videos; it is really hard to tell who benefits more - me or the audience ;) I love to share those learnings and findings, so I am glad you find it valuable!
Hey! If you enjoyed the video, you might also enjoy a related article on my blog: binaryigor.com/kubernetes-maybe-a-few-bash-python-scripts-is-enough.html
hmmmmmm 6 videos, 1hr long, on a todo app, interesting (what are you even teaching that it's taking so long?) I am really curious what you have to teach 😅 I am in your hands (finally YouTube recommended me something that I couldn't ignore)
Hey, it is quite long for several reasons:
* I use minimal frameworks/libraries
* I write all/most code live in the video
* I do both frontend and backend
* We even write integration tests
* We even prepare a Docker image, so you can almost deploy it!

On top of all of that, I try to explain everything as I write the code - that is why it takes so long :)
Hey, unfortunately that's quite complicated, but it can be done ;)

Why? Each shard holds only a portion of our data. Which data belongs to which shard - we decide this in our application-level router. There, we have an algorithm that decides where to write a given piece of data, and it absolutely depends on the number of shards. So, if we had 3 shards for example, changing it to 4 shards will completely change our data distribution - both reads and writes will expect data to possibly be in a different shard than it was previously (when we had 3 shards). So, anytime you want to either add or remove a shard, you need to reinsert your data, or at least some of it. To do that, we would need to write a dedicated procedure/script.

How can we approach this? I would add a fourth shard (we had 3 previously). As a consequence, some reads will not work, because they expect the data to be in a different shard - that's probably fine for a while. If we want to be more sophisticated, we can have a more elaborate algorithm, where we try to read from one of 4 shards, and if we don't find anything there, we try to read from a shard according to the previous data distribution (one of 3). Regardless, from that point forward, we will write new data to the correct shards - after reconfiguring our application and letting it know that we now have 4 shards, of course.

We then need a script that does something like:
* Scan each table on each shard
* For every row of every table: check whether this row should belong here, according to the new shard distribution (4 vs 3 previously). If not, insert it into the new shard (if it doesn't exist there), then delete it (remember, there are no transaction guarantees between shards!)
* Make sure that you run this script after your application has started writing data to the new number of shards (4 in our example); otherwise it will not work as expected!
* Also make sure that the rules I quickly described here work for your application's write patterns, because that might not be the case - you might need to change them a bit

Not easy at all; that's yet another reason why you should avoid sharding as long as possible and then re-shard as rarely as possible. Hope this helps ;)
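The migration steps above can be sketched in code. This is only an illustration under simplifying assumptions: the shards are plain in-memory Maps standing in for real Postgres instances (real code would await DB calls), and the modulo-based `shardIndex` is just one possible distribution algorithm.

```javascript
// Toy distribution algorithm: hash the key, then modulo by shard count.
// A real router would use a stable hash like murmur3; the point is only
// that the result depends on shardCount, which is why resharding is needed.
function shardIndex(key, shardCount) {
  let hash = 0;
  for (const ch of String(key)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % shardCount;
}

// Resharding pass: move every row that no longer belongs to its shard
// under the new distribution. Run it only AFTER the application has
// started writing with the new shard count!
function reshard(shards, oldShardCount, newShardCount) {
  for (let i = 0; i < oldShardCount; i++) {
    // snapshot the rows, since we mutate the shard while iterating
    for (const [key, row] of [...shards[i]]) {
      const target = shardIndex(key, newShardCount);
      if (target === i) continue;
      // No cross-shard transactions: insert into the new shard first,
      // delete from the old one only afterwards, so a crash mid-move
      // can duplicate a row but never lose it.
      if (!shards[target].has(key)) {
        shards[target].set(key, row);
      }
      shards[i].delete(key);
    }
  }
}
```

After the pass, every row sits on the shard that `shardIndex(key, newShardCount)` points to, so reads using the new distribution find everything again.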
Something I've run up against is that if the server-side language generating your HTML and supplying the web components to HTMX isn't JavaScript, some web component functionality can't be used: you can only pass HTML attributes (not JS properties) into the web component, and you can't "new" up a web component and pass anything into its constructor. The web component has to be designed around attribute-based config, and not all of them are. Not sure if there's some trick around this, but it doesn't seem like web components are as portable as I'd hoped unless built in a specific way.
Unfortunately, yes. One way around it is to take an object, turn it into JSON and then into base64 (to guard yourself against special characters). Sadly, you then also need to reverse the process in the web component; it's hackish, but it works :P ...on the other hand, I feel that you can do pretty much anything with the attributes-based approach; granted, it's a design choice and you might not be able to reuse some of the Web Components that are available out there. Maybe we need separate collections of Web Components for server-side vs client-side rendering? Or maybe some common abstraction is still possible, yet to be discovered? I don't know yet, but will definitely continue to experiment :)
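A minimal sketch of that JSON → base64 trick. The `encodeAttr`/`decodeAttr` names and the `<user-card>` element are made up for illustration; also, this sketch uses Node's `Buffer` on both sides so it runs in one place - inside a browser web component you would use `atob()` instead.

```javascript
// Server side (Node): encode an object into an attribute-safe string.
// Base64 output contains no quotes or angle brackets, so it can be
// dropped into an HTML attribute without escaping worries.
function encodeAttr(obj) {
  return Buffer.from(JSON.stringify(obj)).toString("base64");
}

// Component side: reverse the process to get the object back.
// (In the browser: JSON.parse(atob(value)) instead of Buffer.)
function decodeAttr(value) {
  return JSON.parse(Buffer.from(value, "base64").toString("utf8"));
}

// Usage in server-rendered HTML (hypothetical <user-card> component):
// `<user-card data-config="${encodeAttr({ name: "Igor", admin: true })}"></user-card>`
```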
For older people like me (mid 50s) who have seen IT trends come, go and then come back in cycles, it's interesting to see HTMX described as a different way of doing things. To me it is a return to how things used to be done. HTMX seems to be what HTML and AJAX would have become if javascript hadn't become so popular. I'm really excited by it. If it is adopted more widely then it will mean that for many projects we don't need JS on the front end. I'm not saying JS will go away, but the use case for it reduces. If it is used less on the front end then the argument for also using it in the back end reduces too. That opens up the possibility of a whole new tech stack such as Go with HTMX. It could also mean a return of PHP which was originally intended as the back end web language and has improved a lot in recent years. Personally I will be looking very closely at Go and HTMX.
Thanks for the thoughtful comment! I would partially agree ;) To some extent, HTMX is just server-side rendering 2.0. We had it before, but it required a full page reload, which arguably diminishes the user experience in many cases (not all; I go into the details in my blog post). HTMX allows us to have server-side rendering, but with the experience of a Single Page Application (native-app-like). Thanks to it, we rarely need to write JavaScript, but technically it is a JavaScript library, so the language is still essential, even though it is hidden. You made a great point about the push of JavaScript onto the server side - it might well turn out to be driven mostly by the proliferation of JavaScript in frontend apps. Many people naturally wanted to work on both frontend and backend with less need to switch context between multiple languages and tools. In the long run, better technologies win, so I bet on HTMX - it is just a superior technology in many (not all) cases ;)
A Web Component is just a static file, so you can serve it in the same way as you do your CSS or other JS files. Usually, I would recommend Nginx or any other server optimized for serving static files. You can also use a Content Delivery Network; they are available from multiple cloud providers ;) As for serving it with Node.js - this is exactly what I did in the video! Here is the relevant code snippet: github.com/BinaryIgor/code-examples/blob/ac0cd2c48e23ecb26e2f8d62cee605331bd5e854/htmx-web-components/server.js#L33C1-L43C4 Good luck!
@@BinaryIgor Yeah, I saw that, but I'm using a backend framework for Node, so the solution you provided doesn't help that much since it uses a JSX render. But I'll look through the video once again. Thanks for answering. Edit: found out why it didn't work - I didn't know that the browser can't understand TS, so it needs to be transpiled to JS for the browser to understand it. I'm using Hono as a framework btw.
@@more-sun Yeah this was the other thing I was going to ask: what is being done with your final JS output, which is consumed by the browser? ...but in any case, you should be able to define your Web Components separately and use them in the same way as you do with div, p or button ;)
Yes, HTMX is awesome! I hope we will see some production-ready component libraries built for it soon (maybe I will create one, who knows). As far as config files go: do you mean my example or SPA projects in general? Because here we have only three .js files, and one is a server written in Node.js :P
To be honest, htmx has pretty decent documentation, full of examples (htmx.org/docs), but you do need to have the basics in place, which are:
* some basic HTML, CSS and JavaScript
* an elementary understanding of the HTTP protocol
* the ability to write some basic back-end code in whatever language/framework you prefer

In the video, I've used an example app (built with Node + TypeScript + Tailwind CSS, almost no dependencies, a pretty minimal stack) written by me from scratch. You can find the source code here: github.com/BinaryIgor/typescript-experiments/tree/master/some-wisdom-htmx-app So it depends on your current knowledge and what you would like to achieve ;)
I've made a video where I create a simple, single index.html page from scratch :) Enjoy: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-A3UB3tyDWa4.html
Depends on what level ;) There is an hx-history attribute (htmx.org/attributes/hx-history/). By default it is set to true, which means that on the back/forward button, htmx caches pages in local storage and tries to use that, making HTTP requests only when it doesn't have a page in its cache. But it can affect your app's behavior, so for that reason I have turned it off, as you can see in this file: github.com/BinaryIgor/typescript-experiments/blob/master/some-wisdom-htmx-app/src/shared/views.ts

Additionally, the htmx model is that you (mostly) return HTML pages from the server, so you can use ETags (developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag), for example. That requires writing some custom code on the server, but it's an old, standard approach to caching in general, so it shouldn't be that hard to implement. Then, you would return a new page from the server only if it has changed, and return a 304 code without content otherwise. Does this work for you, or do you have something totally different in mind?
Thanks for the video. But this is poor man's sharding, right? I thought in the video you'd make use of some sharding feature of Postgres, but you literally created 3 instances that don't even communicate with each other. I would just make that explicit next time; also, I don't believe anyone is willing to do that in prod. Regardless, good video!
Hey ;) I don't think it's poor, it's just application-level sharding. In fact, Notion (which is pretty big) has used a similar approach in production (they probably still do), and they have even written an amazing piece on it: www.notion.so/blog/sharding-postgres-at-notion.

The problem is that Postgres doesn't have built-in support for sharding (like Mongo has, for example: www.mongodb.com/docs/manual/sharding); you have to build it yourself. There are some third-party solutions, but I don't know if they are worth the price and hassle.

Furthermore, I wanted to present the core concept and the trade-offs involved. Mostly, that you do need to be aware of your sharding schema/strategy, and that it has many consequences (for reads, writes, consistency guarantees, and future scaling to more shards/physical databases, at the very least).

Lastly, databases that do support sharding natively (like Mongo) do similar things under the hood to what I have presented here - they have some kind of Router that parses your queries and is fully aware of your sharding architecture. So getting through this will help you understand the trade-offs behind sharding (especially what happens if you choose your sharding key poorly and most queries need to hit all shards, not just one).

Regarding the dbs not talking to each other - that's the whole point of sharding ;) You split your data across a few db instances (shards), and each of them has only part of your data. That's why it's faster (mostly): you take 1 big database and turn it into N small ones, and small is (almost) always faster ;) Cheers!
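For illustration, here is a toy application-level router in the spirit of what the video shows. Everything is hypothetical: the `ShardRouter` class is made up, and each shard is an in-memory Map instead of a real Postgres connection - the point is just to show that queries by the sharding key hit one shard, while queries by anything else must hit them all.

```javascript
// Toy application-level shard router; shards are in-memory Maps
// standing in for separate Postgres instances.
class ShardRouter {
  constructor(shards) {
    this.shards = shards;
  }

  // Pick a shard from the key: toy hash + modulo by shard count
  shardFor(key) {
    let hash = 0;
    for (const ch of String(key)) {
      hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
    }
    return this.shards[hash % this.shards.length];
  }

  // Good case: query by the sharding key - exactly one shard is hit
  get(key) {
    return this.shardFor(key).get(key);
  }

  set(key, value) {
    this.shardFor(key).set(key, value);
  }

  // Bad case: query not by the sharding key - every shard must be
  // scanned and the results assembled in the application
  findByPredicate(predicate) {
    const results = [];
    for (const shard of this.shards) {
      for (const value of shard.values()) {
        if (predicate(value)) results.push(value);
      }
    }
    return results;
  }
}
```

This is also why choosing the sharding key matters so much: if your common queries can't name the key, they all end up in the `findByPredicate` path.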
In application-level sharding, the application handles the communication between nodes; in database-centered sharding, the nodes communicate with each other.
@@AC-hh2cb Pretty much, yes. With database-centered sharding, you issue queries to some kind of Router that is responsible for querying the appropriate shard, assembling results from multiple shards (if needed), and so on.