When I went there about 3 years ago as part of a school physics trip, I found that calling the server room a warehouse is an understatement. Heck, it has a transformer outside the size of a small house to power this. Very impressive stuff!
That's what I saw too - all of the servers faced inward to the sealed cold aisle, which they drew their intake air from. While it decreases the amount of reserve cold air in a power failure, it also makes the data center bearable to work in (without a coat...) as opposed to hot aisle containment, which uses the open space of the data center as the cold air reservoir and pairs all of the servers back-to-back, creating rows of hot air.
The glass enclosures are referred to as "hot aisle containment" and they are meant to keep the hot air separate from the cold air. Mixing of the two can cause the servers to draw hot air through openings in the racks. Also, high ceilings are actually good for data centers. Since the hot air rises, the stratification also reduces mixing problems and can increase the efficiency of the cooling system. Servers get cold air, AC units get hot air, everyone's happy!
The containers are cold aisles. The areas you walked are "hot aisles" where the fans from the servers blow their hot air. Cooled air comes from the floor / ceiling in the cool aisle, is sucked through the servers and out comes the hot air. Airflow is a big factor in datacenter design.
The glass sheltered rooms are the so called 'cold lanes'. Cold air is blown into these lanes, forced through the computers, and then leaves via the hot lanes. In this case the hot lanes comprise the entire room. This system of cooling is very common in modern data centres, as it efficiently gets the cold air where it is needed most, instead of wherever the AC vent is.
There are some finite limits on the size of transistors, which we are brushing against these days, based on the fact that the tiny currents in the transistors can quantum-tunnel around to the wrong places if the transistors are too small. Veritasium had some good videos on the subject a little while back.
Every time I see an amazing feat like this, I just want to show it to all those conspiracy theorists who think that we can't build the Egyptian pyramids with modern technology.
Unless I'm mistaken, the main reason tape drives aren't used is speed. Specifically seek time. Common hard disks have a seek time around 10 milliseconds. It's not uncommon for tape drives to have seek times of several seconds.
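A quick back-of-the-envelope on those numbers (the disk figure is from the comment above; "several seconds" for tape is taken here as ~5 s, which is an assumption):

```python
# Rough random-access comparison:
# ~10 ms average seek for a common hard disk, vs. an assumed ~5 s
# for a tape drive to wind to the right position ("several seconds").
hdd_seek_s = 0.010
tape_seek_s = 5.0

slowdown = tape_seek_s / hdd_seek_s
print(f"Tape random access is roughly {slowdown:.0f}x slower than disk")
```

That few-hundred-fold penalty is why tape shines for big sequential backup streams, not random reads.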
I think the blue ducts are blowing cold air into the enclosures you see, through the floor. The servers then suck in the cold air from the front and blow it out the back. That way you don't have to cool the entire hall, but only the small corridors between the servers, and extract the heat from the ceiling of the hall.
Whenever an irregular noun changes its meaning, it becomes a regular noun when used with the new meaning. There are two common examples used to express this simple rule: louses and gooses. Louses are a group of unethical people, whereas lice are insects. Gooses are tailoring tools, whereas geese are birds. Another example: the Buffalo Bisons. Bisons are baseball players, bison are bovine herd animals.
A generalised version of Moore's law would just say that computing power over cost increases exponentially, and that might be true but it would require having a totally different kind of computing from what we have now.
I'm no expert, but I think it's because it's still a pretty reliable and durable medium for long-term data storage, and for its size you can store a lot of data, upwards of 5TB, which is what you need when you're generating just under a gigabyte a second. A quick google of "why are tape backups still used" should give you a good overview.
The Grid. A digital frontier. I tried to picture clusters of information as they moved through the computer. What did they look like? Ships? motorcycles? Were the circuits like freeways? I kept dreaming of a world I thought I'd never see. And then, one day...
Those cables are really neat. If you could see the cable mess at the place I work... it would blow your mind :D And not to forget: thank you for the great videos!
Yes, the limits of the classical (i.e. regular transistor) computing era are very close. Much progress can still be made by optimizing the efficiency of transistors in each computer chip. But for a big leap forward, we need quantum computers (still in their infancy), carbon-nanotube-based transistors, or molecular transistors. I'm quite hopeful that computing power will keep making major leaps forward because a ridiculous amount (in the good sense) of research and money is being spent on this.
What is this "big room computing" you talk of? Brady is showing us a regular data centre (that happens to be at CERN, where they do really cool things).
They made Crysis using CERN's computer, FOR CERN's computer, thinking that in 2 months everyone would have a massive personal data centre. Suffice it to say, their research department did not live to see it.
lol, I just finished Steins;Gate, and within one day Alltime10s uploaded a video about time travel theories, and now you guys upload a video about CERN. Coincidence? I don't think so ^^
According to the Wikipedia article on Mouse_(Computer) under the section Naming in the second paragraph, it clearly states that both "mice" and "mouses" are acceptable as the plural form of "mouse". Do double check my sauce though, as Wikipedia is not the best place for accurate references. :)
So the fronts of the computers are enclosed, probably to provide cold air - and the backs are exposed as the outflow of hot air. So colder air is provided only to the exact spot it needs to be (the intake of the system needing cooling).
Their cooling strategies are kinda weird. I can see cold and hot aisles, but no roof? No extractors either? Brady, was the cold air coming from the floor?
I would really enjoy hearing about the software that drives this: MPI - Message Passing Interface. I've used it a few times, but I'd love to learn the origins.
"They're just numbers, but when you think about it, they're pretty high numbers." To make the obvious comment, I can't help but be reminded of the room-filling mainframes of the '50s and '60s. Except these are more modular, and fill up entire warehouses instead. I was gonna say, even 10GB/s seems kinda low by today's standards, especially considering how many machines were working on it, but then I noticed this video was from 2013. I just wanna show someone in the future 1:52-2:06 and watch them piss their pants.
Brady, the sealed aisles are "cold aisles" to control hot/cold separation. That looks to be a raised-floor data center, so cold air is blown under the floor and then up through vents into the cold aisles. The hot aisles are open, and the heat rises and eventually returns to the top of the cooling units, is cooled, and then the cycle continues. And yes, it's "mice".
I don't yet know any quantum computer programming languages. Quantum computers require energy programming (instead of logic programming required for Classical computers), and quantum computers are able to program themselves. The Quantum Age is the dawn of a whole new world!
IF Moore's Law continues at computing power doubling every two years (or the same computing power halves in size), this room will be 1/1024th the size in 30 years with the same computing power.
The density of science and thinking-speed at CERN is so high that one out of every 100 people finds their mind sucked out by the sheer intensity of science being performed. That, and they got boiled alive for touching one of the processors.
Not particularly powerful nor reliable ones (well, kinda, considering the maximum reliability of Q-comps is 50%). As of the last time I checked, the most powerful Q-comp was able to compute that 3x5 = 15... 48% of the time.
In layman's terms, can someone please explain the point of crashing particles into one another? I understand what CERN has been able to produce in the modern era, but how do you make the jump from particle acceleration to something like creating the Internet, or other breakthroughs?
That's only because scientists have not discovered the other uses of quantum computing. Keep in mind, this technology is still in its infancy. When computers were first invented, we only thought they would be useful for simple arithmetic or tabulation problems for large companies. We never knew the direction computers would take, just as we will not know where quantum computing will take us.
Yes, there is a limit. To put it simply, computers are made of a lot of transistors connected with "wires" (and of course insulators that stop electricity from going the "wrong" way, and a bunch of other stuff). You need at least X atoms to make a transistor or a "wire", where X > 1 (a lot bigger). You can't make a conducting wire or transistor that's "half" an atom across, if that makes sense. (Just trying to explain; the picture is a lot more complex in the real world.)
Even if it was running the right OS it still wouldn't be effective. This is meant for mass data transfer and analysis. It's not suited for real time graphics applications. That said, this would be capable of rendering true to life footage with really high resolution physics waaaay faster than your home PC. But it would render to a video that you play back. It still wouldn't be capable of playing a game like that in real time much better than your home PC
Now now, it's probably running some version of linux. With some tinkering you COULD get an X-windows/Lindows/Linspire shell or even a windows virtual machine going... And that would probably still not run Crysis since it lacks a graphics card. Now the Condor Cluster can run Crysis... but that's cheating, the Condor cluster is made of about 1700 PS3's.
Showing people where the actual hardware exists is interesting. Many people get Blue Gene accounts and don't actually consider the resources that it takes to run super-computers. It just seems so easy to submit a job and have it run on a super-computer in a few days.
If I had enough time, money and aptitude I would love to go back to school, study engineering and try to find work there. I would be happy. Very happy!
I assume that all the heat from the supercomputer(s) is being gathered and used somehow? It would be an enormous waste to just vent it into the atmosphere.
Well the strict Moore's law says that transistors will get exponentially cheaper, which means they have to get smaller to fit into a consumer device. A transistor needs to be about 50-100 atoms across so that current can't just bypass the semiconductor, and researchers are already building prototypes about that size. Even if that weren't true, if the trend continues for 10 years we'd be making parts out of individual atoms, and you can't get smaller than that.
It is using a hot & cold aisle system with cold aisle containment, which is the glassed-in area. Hot aisles can get very hot, 40°C+. This is a common type of cooling for a server farm. There are many types of hot/cold aisle setups, some with under-floor delivery & higher ceilings (still built like this depending on requirements like fresh-air cooling etc.), or with low ceilings & no under-floor delivery. There are dozens of combinations & they depend on size, location, weather, computing purpose, or your particular philosophy.
You are very much wrong. Mouse is *not* an acronym. The acronym was invented afterwards as a gimmick. Second, OED only lists 'mice' as "acceptable" because it was the first listed plural of computer mouse. The issue you're running into is that everyone who has any understanding of how English works has recognized that "computer mice" is grammatically incorrect and have changed to "computer mouses." If you were to spend the time to actually read the OED entry, maybe you would understand this.
This is an educated guess about the aisles of racks that have a plastic ceiling: typical data centers have alternating hot/cold aisles. A cold aisle will have cool air blowing into it; the servers suck up the cool air and exhaust the now-hot air into the adjacent aisle, which is then sucked back up by the AC. I believe that the aisles with the plastic are cold aisles, and the plastic stops cool air from rising up and away from the servers.
I visited a tier 1 in the UK. It was quite similar, although maybe half the size. They gave you ear defenders to go around. The tape robots were fun. Tape for older data, disks for newer. Hot and cold aisles, so that only the cold air went through the computers. (Cold aisles had the plastic ceilings.) They use an unusual job control system, so that physicists worldwide can run their programs in these data centres where the data is, instead of trying to download it all, which would be impractical.
Nope, they are not history at all; instead, they are quite expensive and provide huge storage capability as well as reliability. Their speed limitation emanates from the fact that they mainly support sequential access (which doesn't hurt if you're backing up). They are a very precious resource, and the DSS group of the CERN IT Department is thoroughly studying best practices for using them in our multi-petabyte file system.
Actually, we've developed atomic-scale switches using silicon chips, and we've even implemented them. We also have silicon-based quantum computing at a professional level, and we are in the process of developing the computer languages to put it to use. With silicon computing we can have thin clients, which are basically nothing but an internet connection and a video card with a snazzy screen. These allow you to tap into the processing power of a much larger and stronger computer, and this is done even today.
They actually do. Not the CERN Computing Center, which is indeed used for "meaningless" particle scattering analysis. But there are lots of other computing centers throughout the world that are used for other computation needs, including those for cancer research. (Check spallation sources: ISS, SNS, used that way.) In addition, "cancer" is a generic term for a large variety of different diseases related to cellular dysfunction. I'm pretty sure a lot of computational power is dedicated to it.
The problem with that is though, that that was purely based on price and feasibility. There are operations that quantum computers just fundamentally can not do even close to as well as a classical computer. Maybe a classical/quantum hybrid computer may be made as a consumer product one day, but it's highly unlikely a pure quantum computer will ever be a consumer product as the operations that they're good at are very specialised and not usually what a typical person needs a computer for.
It would actually be awesome to use this kind of technology for computer games or real-time simulations. Sadly that doesn't work because of the slow reaction time :( The game could look very awesome, but it needs a long time to process a reaction, e.g. after someone presses a key. And if it has to get a new texture into memory, one of the robots has to fetch a new tape xD Maybe Brady could ask a few experts the question about gaming and these supercomputers. Would be interesting :)
You'll notice that the glass-enclosed areas all have a) the fronts of computers on both sides, and b) little holes in the floor. They run cool air in through the floor there in the "cold" corridor, and the computers on both sides suck it in and use it to cool themselves, spitting out warm air into the "hot" corridor, which then rises to intakes at the top of the room. Very cool engineering, and pretty standard in giant computing rooms like this.
Tapes are good for writing data that you're going to keep for long periods. And as for the cabinets that were 'sealed' up, there's a very real reason for that. Those are all disks, and since they're about the only thing in a modern computer that has moving parts (besides the fans, that is), they generate mind-melting amounts of heat. They're sealed off so a LOT more cool air can be forced through, because otherwise, if the power fails, there won't be enough time to shut down before the disks start to fail.
You very rarely use water cooling in industrial environments, for a variety of reasons. #1: Because of how servers are stacked, one water cooler failure could destroy a full tower of server "slabs". #2: Water cooling is complicated to install, making replacing individual parts difficult. With the usage they get, I'd say one piece of hardware fails at least once a week. #3: Ergonomics. Saw all those cords? Yeah, now imagine having 2 big tubes filled with water PER SERVER SLAB.
It's not that simple. You see, if you smash two pieces of bread at each other - you can't predict where the crumbs will go, but you can tell for sure that there won't be any pieces of ground beef. They can reduce the number of variables to an extent, but they can't make the system fully defined, because there are certain things that can't be defined. There's a lot of fun stuff, like Pauli's exclusion principle and Heisenberg's uncertainty principle. Google it up, it's fun stuff, i tell ya.
But they could use non-conductive liquids, oil or something like that. Liquid cooling is not that dangerous if well made. You'd just need some above-average-quality tubing, the retention clips and bolts, and, aside from the pump and reservoir, a coolant level sensor to detect leaks in case something fails. If they were so leakage-prone, no one would use them; no one would risk a $3000 machine just to run quieter. If it fails, 99.99% of the time it's because of some oversight by the human that built it.
Some data centres are experimenting with a cooling method that involves submerging the servers in liquid coolant similar to mineral oil. Intel has conducted a study that concluded that it is both safe and effective for cooling servers. Also uses about half as much electricity as air cooling. There are data centres such as the CGG data centre in Houston that currently uses this method so it's possible it will be widely adopted in the future.
I can't speak for CERN, but most datacentres won't use liquid cooling for the reason that in the off-chance that one leaks, it can be disastrous for the entire datacentre. The chance of any one doing so is very small; however, when you have hundreds of these, the probability is obviously much higher. There are also other methods that are more cost- and power-efficient than liquid, such as machines with air suction (meaning no chilling involved).
They aren't very common in consumer production, because they are ridiculously expensive for consumers. Normal consumers have 2-4TB hard drives, one of which costs less than $200, so why would they spend more than 10 times than that for a backup medium, when they could just buy additional hard drives to back up their data? You need a lot of data to back up before that starts making sense!
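As a sketch of that break-even argument — all prices here are illustrative assumptions ($50/TB for consumer disks, roughly in line with the $200 / 4 TB figure above, plus a hypothetical $2500 tape drive and ~$40 cartridges holding 6 TB each):

```python
# Back-of-envelope break-even for tape vs. just buying more hard drives.
# All prices are assumptions for illustration, not real quotes.
hdd_cost_per_tb = 200 / 4    # $/TB for consumer disks
tape_drive_cost = 2500       # one-off drive cost (assumed)
tape_cost_per_tb = 40 / 6    # $/TB for cartridges (assumed)

def backup_cost(tb, medium):
    """Total cost of backing up `tb` terabytes on the given medium."""
    if medium == "hdd":
        return tb * hdd_cost_per_tb
    return tape_drive_cost + tb * tape_cost_per_tb

# Find where tape becomes cheaper than extra disks.
tb = 1
while backup_cost(tb, "tape") > backup_cost(tb, "hdd"):
    tb += 1
print(f"Tape wins above ~{tb} TB")
```

With these made-up numbers the drive's fixed cost only pays off somewhere in the tens of terabytes, which is the commenter's point: home users never get there, CERN blows past it daily.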
I work on this kind of infrastructure for a living. It might look cool, but when you spend a lot of time in rooms like this you often get sick due to the extreme temperature differences, dry air, etc. The noise level is often very high. Brady, the reason why they have them in aisles like this is simple. Cold air gets pumped into the aisles (slight overpressure, usually through the floor), goes through the equipment, and then gets vented out the back. Extractors then pick it up and vent it out.
That's either some next-level physics or you're trolling me. I tried to research that on the internet but do not understand it :( But then again, I'm in my second year of college doing physics, where we still learn about Newton's law of gravitation -.- And I know quite a lot about modern physics, hence why I want to do theoretical physics at uni ^^
Yeah, and if you haven't noticed, the room is HUGE. Moore's law works to either halve the size, double the power, or halve the cost every two years (given, for each, that the other two factors stay constant). Consider, also, that the cost of this whole installation must be insane. And finally, 2^15 is 32768, not 1024. 1024 is 2^10.
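For anyone wanting to check that exponent arithmetic, a one-liner sketch (the two-year doubling period is the usual statement of Moore's law):

```python
def moores_law_factor(years, doubling_period_years=2):
    """How many times capability doubles over a span of years."""
    return 2 ** (years / doubling_period_years)

# 30 years at one doubling every two years is 15 doublings:
print(moores_law_factor(30))   # 2**15 = 32768; 1024 (2**10) would take only 20 years
```

So a same-power machine after 30 years would be ~1/32768th the size, not 1/1024th.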
Unless quantum computing advances, we're basically stuck right where we're at.... The transistors that we use today can't get any smaller. I also vaguely recall hearing something about using lasers to make a material temporarily conductive, and how that sort of technology could be used to make processors with performance measured in petaflops
It doesn't list it as actually acceptable; it lists it as an abandoned error in history. There is a difference. The OED lists "philome", for fuck's sake. When was the last time you ever saw "film" spelled that way? Never? Oh, maybe you read Shakespeare once? Because I'm pretty sure Shakespeare was the only person to ever spell it that way. Everyone else spells it "film".
Cool, so you finally agree that the OED says "mice" is acceptable. So between that and common usage, my point is made. "Mice" is correct and acceptable. Society led and the OED followed; you stragglers who are so OCD about a backward set of rules and compliance will get there in time, or just die out, and the argument will be forgotten and won by default. Interesting discussion, bye.
That sounds familiar; it sounds a bit like statements from the '40s and '50s about computers in general. I think it is now proven that nobody can imagine even the contours of society and technology 50 years ahead (maybe not even 20 years ahead). So your statement can easily become as true as Bill Gates's "640K is all the memory anybody would ever need on a computer".
Just because we can't currently envisage a need or use for the computational methods used by quantum computers it doesn't mean to say they won't come forth and find a use in a consumer environment. We need to keep an open mind and a close eye on it because if a use is found, that is what will drive down the cost of quantum computing.
The computing power of today's smart phones was a supercomputer 23 years ago. I'd be unsurprised if your 2036 handheld device had this much total computing power, although I expect it will only run at a slightly higher clock speed than your current phone and 99+% of the power would be in parts more like today's GPUs than today's CPUs.
Honestly, not trying to be a Debbie Downer, but you would need an efficiency increase of another 10 orders of magnitude. If Moore's law lasts, that's probably about 35 years. But that level of scaling would require transistors that are subatomic in scale. Not saying it won't happen, but it could very well be more like 35000 years.
Hi Brady, I noticed that you say 'filming' although you don't use any film. I understand that it is a figure of speech, and it must be like the save icon with a floppy disk even though we don't use floppy disks any more. Do you think people will ever stop using the term 'filming' and use another word like 'recording'?
Brady, the glass corridors are indeed for cooling. They call it hot and cold aisles. You see that all the computer fronts are facing the cold (covered) aisle. Cool air is blown in there from below the floor. All fans in the computers are arranged so that they suck in cool air from the front and blow out the hot air at the back.
With that amount of processing power, you could just emulate a graphics card. If you've ever run a CPU stress test to get a score, there is always a part where they use the CPU purely to render 3D graphics. They aren't great quality, but it can be done nicely with a quad-core @ 3.0 GHz per core. Now imagine the amount of processing power available there.