As an Organized Research Unit of UC San Diego, the San Diego Supercomputer Center (SDSC) is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC's Comet joins the Center's data-intensive Gordon cluster; both are part of the National Science Foundation's XSEDE (Extreme Science and Engineering Discovery Environment) program.
Honestly, Santa Monica and the Westside look to get hit much worse than the SF Valley where I live. And Palm Springs? Forget it. I don't understand why people live in Palm Springs. It's a mercilessly hot hellhole, not to mention sitting right on the fault. Hey, we were only a couple miles away from the epicenter in the 1994 Northridge quake. I don't think it's gonna get much worse than that! What's amazing is that the San Andreas quake takes more than a minute to reach the city of LA. That's a very long time. Oxnard and Ventura also seem to take a significant seismic hit.
We live in Los Angeles, so a 7.8 IS FRIGHTENING ENOUGH! In this simulation video you can see how far the 7.8 travels; I just wish it would also give the estimated magnitude each area would get hit with.
Galyleo is amazing; I always wondered why my cluster notebooks are served over HTTP and not HTTPS. This session was really good, thank you. And yes, I too save commands that work. Indeed, live demos fail most of the time. Thanks again.
0:04:00 Expanse: System Overview
0:33:30 Running Jobs on Expanse
1:17:08 Managing Your HPC Software Environment
2:27:32 Managing Allocations and Charging
2:44:45 Expanse User Portal
2:57:30 Interactive Computing and Running Jupyter Notebooks
3:22:00 Data Storage and Transfer
To be honest, I don't entirely understand why ISPs are allowed to be run for-profit by private entities. They're basically the roads of the Internet and should be provided at cost.
The Salton Sea, right? It went off again a couple days ago, 10 months later. It's actually the volcanic area there, with its mud volcanoes, that's acting up. We're likely fine, hopefully.
Internet service providers were never considered "common carriers." I worked as a senior staff member at the FCC and contributed to the Computer Inquiry Reports and Orders that defined the use of those terms. Whatever your view of the topic, the SDSC statement is not accurate.
Very good video, thank you! However, I replicated exactly the same groupby example as his using Numba, and it's still much slower than the Pandas groupby... I don't understand how he got the njit version so quick... The Pandas groupby took 3 seconds while njit took 14 seconds... so much slower...
OK, I noticed I had done something differently: I had created a DataFrame and not a Series. I am surprised by how much difference this makes! Basically, if you create a DataFrame instead of a Series, it becomes much slower. I would understand that for the Pandas groupby, but I don't understand why it impacts the njit version. Nowhere in the function do we specify whether it is a DataFrame or a Series, since we pass the numpy arrays to the function and not the Pandas objects... So how come Numba is much slower? How does it know???
OK... replying to my own message... Numba ends up seeing the difference between a DataFrame and a Series because at some point we use np.zeros_like(m). The array you pull out of a DataFrame is 2D, so np.zeros_like(m) allocates a 2D output (the output inside the function), which slows the function down a lot compared to the 1D output you get from a Series. So if you start from a DataFrame, you can just replace m_numba = np.zeros_like(m) with m_numba = np.zeros(len(m)) and it will be fast.
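For anyone else hitting this, here's a minimal sketch of the effect. The grouped running-sum kernel, the names group_running_sum and n_groups, and the data sizes are my own reconstruction, not the exact code from the video:

import numpy as np
import pandas as pd
from numba import njit

@njit
def group_running_sum(keys, values, n_groups):
    # Hypothetical reconstruction of the video's groupby kernel.
    # Explicit 1D output: np.zeros_like(values) would inherit the input's
    # shape, so a 2D array (e.g. from df.to_numpy()) would silently give
    # a 2D buffer and a much slower compiled kernel.
    out = np.zeros(len(values))
    totals = np.zeros(n_groups)  # per-group accumulators
    for i in range(len(values)):
        totals[keys[i]] += values[i]
        out[i] = totals[keys[i]]
    return out

rng = np.random.default_rng(0)
keys = rng.integers(0, 50, 1_000_000)
values = rng.random(1_000_000)

fast = group_running_sum(keys, values, 50)

# Cross-check against the Pandas equivalent.
expected = pd.Series(values).groupby(keys).cumsum().to_numpy()
assert np.allclose(fast, expected)

Allocating with np.zeros(len(values)) pins the output to 1D no matter where the input array came from, which is exactly the fix described above.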