PlanetScale is building the leading MySQL-compatible database platform that aims to reduce the cost and effort associated with managing database infrastructure while making MySQL near infinitely scalable.
With PlanetScale, you get the power of horizontal sharding, non-blocking schema changes, and many more powerful database features without the pain of implementing them.
Make sure you subscribe to learn more about MySQL, databases, and scaling your applications.
Are you referring to our Learn Vitess or MySQL for Developers courses? Those can both be found here: planetscale.com/learn/courses/vitess and planetscale.com/learn/courses/mysql-for-developers. Are you looking for an additional resource beyond what these provide?
@@PlanetScale Yes, I'm looking for resources beyond these. The resources above use Vitess/MySQL examples. It would be great to get docs or videos where they actually write those example scripts and explain them.
@@raghu8705 For Vitess, here are a few resources you can try: The official docs: vitess.io/docs/ Our Vitess page: planetscale.com/vitess (also has links to other pages with more info) You can also search around for Vitess on YouTube, where there are a number of conference talks. For example: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-H4B5zLBfGN8.html
Could you please explain what you mean by "scrolling and throwing away that offset data"? I believe you are talking about things at the memory level. It would be very helpful if you could explain and show how pagination works at the disk level, and how pagination queries select data from disk partitions stored in B-tree format.
You're probably referring to cross-shard (or cross-keyspace) queries. There's a blog post from Square that talks about these with Vitess that you might find interesting: developer.squareup.com/blog/cross-shard-queries-lookup-tables/
What database are you looking for the hostname for? Your PlanetScale database? One elsewhere? If you are currently trying to import a DB into PlanetScale, I recommend opening a support ticket: planetscale.com/contact
I have a question for you. I'm working with a table that has more than 3 billion rows, and 20-25 million new rows are added each month. We created 5 indexes for efficient searching, but when we access even a month's worth of data it takes too long. Access to the most recent week's data is fast (we mostly access recent data within the current month), but once the new records exceed 3-4 million rows, after 4-5 days the query becomes very, very slow again. I'm thinking of creating a separate table for each year to balance the load, but I can't understand why it works well for the first 4-5 days of the month and then becomes slow again after 4-5 million new rows. It doesn't make sense to me. Any suggestions would be really appreciated.
Interesting scenario! Hard to say for sure without more details, but a few things could be happening here: (1) The queries fetching only recent results can mostly hit data that is already in memory. The ones that fetch larger and older data sets might require significantly more I/O, leading to drastically slower query times. (2) You said you created 5 indexes ... are you confident they are being used appropriately? If you are using PlanetScale, you can use Insights to figure this out. If not, you can try running some EXPLAINs to see what's going on. You might find this useful for investigating: planetscale.com/blog/identifying-and-profiling-problematic-mysql-queries
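As a starting point, an EXPLAIN check for a date-range query might look something like this (the table and column names here are hypothetical, just to illustrate what to look at):

```sql
-- Hypothetical table/query: check whether the date-range query uses an index.
EXPLAIN
SELECT *
FROM events
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01';

-- In the output, look at the `key` column (which index, if any, was chosen)
-- and `rows` (estimated rows examined). `type: ALL` means a full table scan.
```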
Hey, thanks for the walkthrough. The anomalies detect similar latency issues. What's the difference between the sort option in the queries table vs anomalies? Also, do anomalies detect SQL injection or anything similar?
In short, anomalies happen when there is a spike in queries executing *slower* than the 97.7th percentile for the query pattern. A query could have long latency but not trigger an anomaly (for example, if that query typically takes a long time). You can read more about anomalies here: planetscale.com/docs/concepts/anomalies
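To make the percentile idea concrete, here's a rough sketch of flagging slow executions per query pattern (the `query_log` table and its columns are hypothetical, not PlanetScale's actual schema; requires MySQL 8.0 window functions):

```sql
-- Flag executions slower than the ~97.7th percentile for their query pattern.
SELECT pattern, latency_ms
FROM (
  SELECT pattern,
         latency_ms,
         PERCENT_RANK() OVER (PARTITION BY pattern ORDER BY latency_ms) AS pr
  FROM query_log
) ranked
WHERE pr > 0.977;  -- potential anomaly candidates
```

Note that this only flags candidates; as described above, an anomaly also requires a *spike* of such executions, not a single slow query.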
@@raghu8705 Yeah! You would see evidence of an anomaly in that tab as well. BUT not all high query latency is necessarily an anomaly. When we call something an Anomaly, we are basically saying your database is not healthy: there's something abnormal going on that you should look into. Whereas if you're looking through the query latency metrics in that Insights table, there may be some high query latency that, yes, may not be great, but it's not unusual and doesn't mean there's an issue with your database.
@@PlanetScale How is a row that has a relation to a row in another shard handled, and vice versa? When many shards have many relations with each other, what patterns are applied in that case?
@Xaoticex You are probably referring to "scatter-gather" queries, or queries with cross-shard joins. Good to know that there's interest in this kind of content. Here's some additional reading if you're interested: developer.squareup.com/blog/cross-shard-queries-lookup-tables/
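One common pattern from that post is a lookup table that maps a secondary key back to the sharding key, so a query can be routed to a single shard instead of scattering to all of them. A hypothetical sketch (table and column names are made up for illustration):

```sql
-- Suppose orders are sharded by customer_id, but we often query by order_token.
-- A lookup table (kept unsharded, or sharded by order_token) maps the
-- secondary key back to the sharding key.
CREATE TABLE order_token_lookup (
  order_token VARCHAR(64) NOT NULL,
  customer_id BIGINT      NOT NULL,
  PRIMARY KEY (order_token)
);

-- Step 1: resolve the sharding key with a cheap single-shard read.
SELECT customer_id FROM order_token_lookup WHERE order_token = 'abc123';

-- Step 2: query the orders table on exactly one shard using customer_id,
-- instead of a scatter-gather across every shard.
```

Vitess automates this idea with lookup vindexes, so you don't have to maintain the routing table by hand.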
THANK YOU SO MUCH for this easy-to-understand explanation. You helped me make my first replica of my server. I had been manually backing up my database each day, so this makes life so much easier. Now that I have done this and have two Raspberry Pis at my business in sync, I am thinking I should have a remote one at home just in case of fire or other catastrophes, as I am in Florida. I know the static IP address of my business, but how do I set the master host IP address for this? I am assuming a port number? I appreciate any insight. Thanks again for the video.
Glad you found it valuable. For the scenario you described, it depends on how things are configured. Does your primary server have a static + public IP address? If so that would make it simpler. If not, you might need to set up some kind of DNS-based solution.
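For reference, pointing a replica at a primary by IP and port looks roughly like this on the replica (MySQL 8.0.23+ syntax; the host, port, and credentials below are placeholders for your own setup, and the port must be reachable through your router/firewall):

```sql
-- Run on the replica; all values are placeholders.
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = '203.0.113.10',  -- your business's static public IP
  SOURCE_PORT = 3306,            -- MySQL's default port
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = '...',
  SOURCE_AUTO_POSITION = 1;      -- requires GTID-based replication

START REPLICA;
```

On older MySQL versions the equivalent statement is CHANGE MASTER TO with MASTER_HOST/MASTER_PORT. Also consider requiring TLS on the replication user, since this traffic would cross the public internet.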
Don't get me wrong, it's a great video. But I'm still a little bit disappointed, because there are so many parameters that control the buffer cache and I was expecting more. Please remember that a "good" hit ratio does not mean that your application is tuned.
Absolutely! This was a very simple benchmark. Real-world workloads are going to be much more complex, and you may need to do some deeper analysis to decide how to tune your buffer pool. If you use PlanetScale, we spin your DB up with it pre-tuned.
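For anyone who wants to inspect their own setup, a couple of standard MySQL commands (and, as noted above, a good hit ratio alone doesn't mean the application is tuned):

```sql
-- Current buffer pool size, in bytes.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Innodb_buffer_pool_reads (reads that went to disk) vs
-- Innodb_buffer_pool_read_requests (logical reads) give a rough hit ratio.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- The buffer pool can be resized online in MySQL 5.7+ (size shown is just an example).
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;  -- 4 GiB
```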
It's sad that people think the Y2K bug was a hoax - it was every bit as terrifying as they claimed, and we worked our butts off to fix it. At one point I was looking at a DB table with six-digit dates to add a century, only to discover that different programs (Link on a Unisys mainframe) were saving as YYMMDD and DDMMYY... that was fun. It was a crucial table in NZ's social welfare debt management system. The scope for REAL harm (not just financial carnage) was huge. As for why we let it happen? I can remember like it was yesterday my manager saying "don't worry, we'll have replaced the system by then" and never getting budget?!%!
Hey @PlanetScale, at my company we are HEAVILY dependent on AWS RDS. How can I convince my team to be flexible and migrate one or two of our MOST CRITICAL applications?
@@SentinelaCosmica We have several customers that moved over from RDS and are incredibly happy. Better developer experience, near infinite scale with horizontal sharding, and pricing is usually similar or better. Migration from RDS to PlanetScale does not involve any downtime either. Happy to send you more info or set up a call with your manager if you'd like! Email me holly@planetscale.com.
How do you guarantee that the bounding-box condition is evaluated by the engine before the costly distance calculation is performed? Does the order of the WHERE conditions matter?
@@EpicAchievementGuide Hi! How can I tell whether the name you provided is correct? Do they have a video introducing their team, or a website featuring the people of PlanetScale?