
24 Thread PowerEdge R710 with 2xGPU PCIe x16 Gaming, Video Editing, Super Server for under $900 

43K views

How I modified a 24-thread (dual Xeon X5660) Dell PowerEdge R710 server with PCIe x8 slots to run a PCIe x16 nVIDIA GTX 970 and an nVIDIA Quadro K4000 at the same time, creating a low-power gaming, video editing, and compute server for under $900.
The build materials:
1. 1x PCIe 6-pin male to 8-pin male GPU power connector cable
2. 2x PCIe 8-pin female to 2x 6+2-pin male GPU power connector cable
3. Flat-tip soldering iron
4. Wire cutters
5. ~2in x 2in scrap straight-edge PCB
6. 1x dual-slot, low-power GPU such as the nVIDIA GTX 970
7. 1x single-slot, low-power GPU such as the nVIDIA Quadro K4000
8. 1x low-power, high-IOPS server such as the PowerEdge R710
The PowerEdge R710 is a low-power, general-purpose server with a high-IOPS architecture meant for multi-user, balanced data and compute performance, but it lacks the GPU support needed for compute-intensive gaming, CAD work, and other heavy mathematical workloads.
The nVIDIA GTX 970 is a dual-slot, low-power (TDP = 145W), high-performance (1664 CUDA cores) GPU. It's currently assigned to a Win10 VM running on CPUs 2/14, 3/15, 4/16, 5/17.
The nVIDIA Quadro K4000 is a single-slot, low-power (TDP = 80W), high-performance (768 CUDA cores) GPU. It's currently assigned to a Win10 VM running on CPUs 8/20, 9/21, 10/22, 11/23.
Total observed maximum power draw is 285W when both GPUs are running in two separate virtual machines. The server is left on 24/7 and idles at about 150W. It's currently servicing Nextcloud, SSL, 2 video editing VMs, 1 Windows Office VM, and 1 Ubuntu VM.
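For anyone sizing a similar build, the power figures above are easy to sanity-check with a few lines of code. The sketch below is illustrative only: the GPU TDPs and ~150W idle draw come from this description, while the 870W PSU rating (mentioned in a comment below) and the 80% headroom margin are assumptions, not measurements from this build.

```python
# Rough PCIe GPU power-budget check for a modded R710 build.
# GPU TDPs and idle draw are from the description above; the PSU rating
# and headroom margin are assumptions - verify against your own hardware.

GPU_TDP_W = {"GTX 970": 145, "Quadro K4000": 80}
IDLE_PLATFORM_W = 150        # observed idle draw of the whole server
PSU_RATING_W = 870           # high-output R710 PSU option (assumed)
HEADROOM = 0.80              # stay under ~80% of the PSU rating

def worst_case_draw() -> int:
    """Pessimistic estimate: idle platform plus every GPU at full TDP."""
    return IDLE_PLATFORM_W + sum(GPU_TDP_W.values())

if __name__ == "__main__":
    draw = worst_case_draw()
    budget = PSU_RATING_W * HEADROOM
    print(f"Worst-case draw: {draw} W, budget: {budget:.0f} W")
    print("OK" if draw <= budget else "Over budget - rethink the GPUs or PSU")
```

The observed 285W maximum sits comfortably below even this pessimistic estimate, which is consistent with a single tapped PSU feeding both cards, as described in the comments below.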
Here's a link to a diagram of this use case.
nextcloud.fusionconfusion.net/index.php/s/y2H6xbD93EmtcP7
The link goes to the NextCloud server that is running on the very server build you're watching.
The 2 Video Editing VMs can also be used for high performance gaming servers. I haven't done this yet because I don't own any high performance PC games. If someone wants to donate a game for me to try, that would be awesome!
[Future]
Since this initial project, I've developed a new, faster method to notch the PCIe slots using a router and to tap PCIe power using solderless blade contacts. If you're interested in this modification technique, leave a comment about what you want to know and I'll find some time to put together a video. Additionally, I reduced TDP to 273W and gained about 700 points of G3D compute performance by swapping the Quadro K4000 for a Quadro K2200.
Thanks go to “SpaceInvader One” for unRAID tutorials. Thank you, Ed!
Thanks also to the folks at LimeTech for bringing unRAID! Best overall reliability, stability, flexibility of any RAID system.

Science

Published: 17 Jun 2019

Comments: 218
@shaunwhiteley3544 5 years ago
New subscriber, got you into double figures 😀. With content like this it’s going to get a lot higher! Cheers
@IEnjoyCreatingVideos 4 years ago
Great video Pete! I really enjoyed it :) Thanks for sharing it with us.👌👍😎JP
@ThePoot_tf2 4 years ago
Love this video, I love virtualization and I was actually planning the parts out for the R710 but then forgot I needed PCIe power. This is perfect, thank you!!!!! Gained a new subscriber!
@PeteDiful 4 years ago
Thanks, Poot! I was looking to replace an old Dell Windows Server with affordable modern server that can also do video editing. Accidentally stumbled upon R710 2x6CPUx2Thread (mainly because it was cheap, 10x cheaper than my old server new) and puzzled at so much compute power but useless to me without graphics. Had to do something to free up all that power. Been lov'n it more than 1/2-year later. Server has been running 24/7 w/o interruption doing NextCloud, 2xWindows VMs, dual high performance video editing VMs, and now Web Server Farm for $0 and server hardly sweating! Totally replaced old server, saved tons of $$$ in long run & totally self contained. All my older office laptops are now remotely logging into server to get work done way faster. Converting all my staff to remotely logging into VMs on more R710 builds.
@aantonis 4 years ago
Great video. Answers a lot of questions that many people have all over the internet. Good Job. Please tell us more on your blade tapping workaround!
@PeteDiful 4 years ago
Thanks, Antonius! Good to hear. Please check the "Thomas Tomchak" comment thread for blade tapping solderless connections. These Dells also come with another type of non-slotted PSU backplane rails. If there's a metal bridge in the PSU slots where you can't slip in a spade contact, the spade/blade contact technique actually gets easier: instead of rippling the spade, just clip the spade halfway, horizontally to the middle, to create a "biting" notch. Then simply push the spade's "biting" notch onto the bridge of the PSU slot. You can also use other connector types (e.g., round ring) for these bridged PSU slots. Sorry I haven't had the time to put up videos on this. Hopefully I will soon. Good luck!
@Das0ren 5 years ago
Thank you so much for this video. This has helped me so much. You are a life saver.
@PeteDiful 4 years ago
Thank you, Dasoren! Means a lot to me to know that video was useful and what direction I should go next.
@royasm8964 7 months ago
Nice job! Just built a new r710 for my daughter and hoped it would be able to do their video and photo editing. Looks like you answered that for me! Thanks so much!! Happy New Year!
@SpaceinvaderOne 5 years ago
Great video Pete. Very interesting and very well put together.
@PeteDiful 5 years ago
Thanks for the support & encouragement, SpaceInvader One!
@knewsom 4 years ago
Just ordered a R710 with a RAID card and 32 Gigs of RAM to build a workstation at home with - I've already ordered 870w PSUs and 3.33Ghz X5680s to upgrade the unit, will increase the RAM at some point soon, for now 32 is enough. I already have a GPU I can integrate, I'll be using the riser closer to the power supply, and run power from the rail like you did. Thanks so much for putting up this video and encouraging me to go this route - now I can have an entire dedicated workstation and server for only a little more than I was going to spend on just a new RAID enclosure! I might try a basic modification to the riser card to ensure the GPU does not become disconnected.
@PeteDiful 4 years ago
Thanks, Kristoffer! You're going to have a lot of fun with that pretty powerful, beasty RAID/NAS and VM machine. Let me know how it goes. Good luck!
@ikkuranus 4 years ago
I'm going to be modifying a 1x slot in my upcoming nas build to fit an HD Radeon 5350 and leave the bigger slots for HBA + 10gb nic. I'm probably going to just use a Dremel to cut off the end since I no longer have a functioning soldering iron.
@Mondian 2 years ago
Nice video, thanks for sharing. Damn shame you haven't found time to put out any more videos after two years. We're still waiting on that wiring one!
@TheOther9519 4 years ago
Excellent video. Hope you made a follow up too!👍
@tyantopriyatno127 4 years ago
Thanks for sharing
@Trick-Framed 10 months ago
I bought my R710 from a friend and it came with pre-notched x8s and also a single x16 board. Either can be placed in the same socket. To this day I have not gotten that server up and running. I have more memory sticks, HDDs and extra fans and PSUs than I need to. He gave me a half height rack with it. Nice unit with wheels and a solid chassis. Also gave me enough PoE and standard Switches to choke a horse. It came with a pair of X55xx Xeons and I bought a matching pair of X5660s to replace them 3 years ago and still have not placed them in. This video may inspire me to actually use the server as this very project was the first thing I wanted to do. See if I could run an instance of Windows 10 and a standard gaming GPU as a desktop. If that worked i'll want to get a newer server with faster cores so I can do that AND run a slew of other programs in the background. From FreeNAS to some sort of DHCP server to a firewall. I want it all to run and have some room left to run a game server.
@PeteDiful 5 months ago
Very cool. Sounds very exciting! Let us know how everything went.
@alexisguerrero7551 1 year ago
Amazing!
@Ryan-qt9jm 4 years ago
impressive!
@agentalpha92 2 years ago
I'm very interested in how to add solderless PCIe power contacts to an IBM x3650, which already has full-length PCIe 3.0 x16 slots on risers. Do you have any videos showing the technique?
@TeenyTinyMoose 4 years ago
Nice work. ETA on when you might show off your blade contact technique?
@spikeywayent 4 years ago
Nice work here!! I'm interested to see the solderless blade contact method.
@attiii 3 years ago
Hey Pete! Great video! I have an R610 that I am trying to add a 2x 8-pin GPU (Radeon 6700 XT) to... will this power splice technique work for up to 300W? Can I safely split the 6-pin power into 2x 8-pin? I've got an extra PSU for now, but would love to consolidate this down!
@davidwiles6042 1 year ago
I know it's been a few years since you posted this video, but it really looks like it could be the answer to my problem. I want to put a Quadro P4000 in my R710, and have been trying to find good ways to power the card for a couple of weeks now. It looked like connecting directly to the PSU was the answer, but I couldn't find any instructions for doing this. You mentioned in the description that since you posted this you also found a way to tap the power without soldering; do you have details on how you do that? I'd rather avoid soldering anything in my machine if I can manage. By the way, I think you mentioned it here, but it is possible to get a PCIe x16 riser card for the R710 instead of cutting the back of the x8 slots. That should at least allow you to put one card in with minimum effort.
@blackschannel1 4 years ago
Hello, good afternoon, very good channel! Maybe you can help me. I have a Dell PowerEdge R210 II and added a video card, a GeForce GT 1030, and I can't get it to bring up an image. I get a fixed prompt at the top left and that's it, nothing else; the rest of the screen is black. For more context, it was working correctly with an old GeForce 8500 GT board. Another fact that may help: a friend has the same server, added a GeForce GT 730, and it works without problems... Any idea what is happening?? Thank you very much in advance!!!
@nicholasreynolds6609 1 year ago
Do you have a pic of the spade connector-type setup?
@jjnotme1227 3 years ago
You could use PCI risers and an external power supply, in theory, to do this without the modifications. Don't use the risers with this soldering method shown in the video though, it would be unsafe from my understanding.
@jch77 1 year ago
Could you not have gotten some GPU-compatible (power and all) riser cards off of eBay or something?
@diymadness747 4 years ago
Hey Pete, thanks for the great video! I was wondering, what happens with the GPU fans' airflow after you close the server lid? In the video it looks like there is no space between the lid and the GPU fans. Since you've run it for a very long time I guess the GPUs are not overheating...
@PeteDiful 3 years ago
Sorry for delayed response. Google moved my RU-vid feedback comments to "Social" mailbox, which I never check; and I've been busy with other projects (69GByte network storage transfer to SSD!) since to be checking on this hobby thread. You probably figured things out by now :( Great, observant question! Honest answer is I'm not sure but guess that the GPUs (& other cards) will restrict the 5 main server fans' airflow, while the GPU fans will "suck" air through the ~1/2inch gap with lid, and all that forced air goes out to the rear of server. My observation is that the R710 airflow design is superior with a forced airflow channel from front to rear with the PCIe Riser cards arranged inline with airflow to minimize flow obstruction. Generally the overall server/CPU temps are about ~32C, and things run really good. The server automatically throttles performance down if temp gets too high, which I have experienced on rare occasions when a particular server is placed in ~110F environment.
@skyrunner021 5 months ago
I'd love to see new techniques
@prashanthb6521 2 years ago
Nice idea. Another idea is to use a hot knife heated on a stove.
@welbo9766 4 years ago
Thank you for the video. I'll be trying to make an R710 work with a GTX1060. I tried a GT730 and the machine wouldn't boot. I didn't work on it very long though. I like your power supply mod. Getting power out of the R710 is the hardest part. Using the blade connectors sounds like a more elegant solution although I am worried about them vibrating loose. What do you mean by "ripple spring"? If I were to guess, it sounds like you are bending the blade connector in half along the long edge and forcing it into the power rails. Do you feel like it is a secure fit? 150w idle! Wow! Mine idles around 285W. Probably because of the 6 2TB SAS drives. Unraid is a nice system. I'm running Proxmox 6.1 for a VE. I really like it but it took a lot of tweaking. Thinking of setting one of these up for a friend and Unraid looks a lot more hands off. One benefit of Proxmox is the LXC containers can share the GPU's. Looking forward to any more R710 videos. Thanks again.
@PeteDiful 4 years ago
Thanks, welbo97! If the PSU backplane is the slotted type (not the bridge type), then making spade/blade ripples like a FLATTENED/GENTLE letter 'W' or 'N' when viewing the spade's side profile, with just enough ripple height to wedge securely into the PSU slot, works very well. My spade connections are quite tight and require pliers with a strong pull to unseat the connections. I've also shipped these servers with these connectors overseas without any issues. If your PSU backplane is the bridge type, it's even easier, as all you need is to simply wrap wires around the "bridge". You can get fancy with spade connectors and nip them in the middle to create a biting notch where you attach to the "bridge". Sounds complicated but it's super easy. Wish I had the time to get these mod videos up to show how easily it's done.
@gliandermarin3508 4 years ago
Pete, thanks for this amazing video, it was very helpful!! I have a little doubt. Is there any way to disable the onboard GPU?
@PeteDiful 4 years ago
Hi Gliander. I don't remember needing to disable onboard GPU. When starting out, view screen via built-in VGA or iDRAC console. For the modern nVidia (& some ATI) GPUs that I use, the Windows driver installation process should recognize the GPUs and complete the process. If Windows GPU driver installation is not successful, then something wrong likely with HW & connectivity. I don't know how Linux or other OS drivers do, so can't comment on their process. Once Windows driver installation is successful, reboot with monitor connected to PCIe GPU. Should be golden here. I do everything headless (no monitors) at first until GPU drivers are done. I remotely access either via iDRAC, VNC, or Avocent remote KVM to do GPU driver installation. After drivers are done & reboot, the remote connections to OS will have control of PCIe GPUs, where I can configure it for higher resolution. No disabling of onboard GPU. IMHO, it's a good thing to have onboard GPU available in rare case you need direct access to terminal. This is especially true for me when I was new to these things and needed simple, direct way to debug. I have full remote capabilities so never since used the built-in VGA. Good luck!
@d96634 3 months ago
Wow, I would love to do this now, but I need to see what hardware to use these days, as this is quite old.
@centaurs63 4 years ago
Are you able to show this server running any kind of modern games? I am thinking about copying this build. Thanks for this how-to, it is really helpful.
@PeteDiful 4 years ago
Thanks, La Var Blalock! Yikes - modern games! Sadly, LVB, I don't have any modern games. Just modern video editing tools. Depending on the game's IO throughput requirement, as long as PCIe 2.0 x4 or x8 suffices, then I don't see any issues with the "modern games". These Dell servers, Xeons & GPUs are spec'd to do exactly what they're meant to do. Obey the force and things just work. :)
@robertcsims 4 years ago
Thanks for the video, I have a few of these R710s and will be doing this upgrade. In the video, the graphics card showing in the OS was the 970, a 4GB card. Are both the cards used at once? The K4000 is 3GB; would that be combined? Was the second card seen by the OS? I will be looking for the updated power cables based on your replies in the comments. Thanks again.
@PeteDiful 4 years ago
Thanks for watching and commenting, Robert! Sorry for delayed reply. I've been swamped lately. Yes, both cards (or 3 GPUs as in most of my other server builds) are running simultaneously. Each GPU dedicated to a VM. I haven't tried multiple GPUs to one OS yet. Technically it's do-able because server platform is built to do this. Sounds interesting? What's the application?
@OmarSan2k7 4 years ago
Wow Great Video! Could you please explain your new method for the PCIe notch and the solderless contact pins ? Thank you ! Great Job !
@PeteDiful 4 years ago
Thanks, Omar! For solderless contacts, please search comments by "Dsdj Antonius" & "Thomas Tomchak" for descriptions. Good luck!
@micheledimauro1282 4 years ago
good job!
@PeteDiful 4 years ago
Thanks for comment, Michele!
@Night__Rider__ 3 years ago
Did you get any Error 43s with the GPUs detecting they're in a VM?
@muhammadidhar877 2 years ago
Will there be a problem with the cable, like burning or a short circuit?
@ArtofServer 5 years ago
Thanks for sharing your modifications of the R710! This was one of the best PCIe slot opening techniques I've seen. Would love to see another video about your "improved" technique for that process, as well as the blade contact for tapping into the PSU. BTW, since it looks like you only tapped into one of the PSUs, does that mean the GPUs do not have redundant power? Will they power off if you pull that PSU or will the circuit be powered by the other PSU as well? BTW, I've figured out how to get the H200 with LSI IT mode to work in the integrated storage slot. Same technique would also apply to H310 as well, but I think the H200 is a better fit. If you used the integrated storage slot, you might save yourself a PCIe slot, or be able to use a double slot GPU in the Riser 1 location. See here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-v0AEHVdc_go.html
@PeteDiful 5 years ago
Thanks for kind, positive comment Art of Server! unRAID and modifying things are fun hobbies of mine when I have limited time to relax and play with them. Since it also seems to bring excitement and interest to you, I'll find some time to put together a quick video on improved modification techniques as alternate options. The R710 has redundant PSUs that have their power rails ganged together in the PCB. So whichever PSU socket you tap from for your PCIe GPU power cable, the GPUs will always get power whenever any one of the PSU is supplying power. Thanks for the nice tip on SubsysPID=0x1f1e for H200/H310 to work in Riser1 storage slot! Definitely can use the extra middle slot for NVMe SSD or 3rd GPU!
@RyanDavis87 4 years ago
If the power was run in parallel from the second power supply, this would create a scenario where it would survive the loss of a single PSU. There are some techniques that could be used involving micro switches or fast switching relays which would be safer than connecting the two power supplies together (not recommended to connect them together in the same circuit) .
@somerandomguy1533 4 years ago
​@@RyanDavis87 This means a really big fire in your house.....
@abyssalreclass 3 years ago
I feel bad now, I bought a passively cooled GT1030, which (if you get the crappy DDR4 variant) fits natively into an x8 slot and falls under the power limit for a slot. Performance leaves something to be desired, but it lets me offload long term CUDA stuff to it.
@PeteDiful 3 years ago
I'd rejoice if the SW can fully max out all those beautiful CUDAs :) Thanks, AbyssalReClass!
@mitchellcrane9809 4 years ago
In the case of, say, video editing or rendering, does a workstation card still do better than a gaming card? What was the reasoning behind putting both a gaming card and a workstation card in this server build?
@PeteDiful 3 years ago
Sorry for delayed response. Google moved my RU-vid feedback comments to "Social" mailbox, which I never check; and I've been busy with other projects (69GByte network storage transfer to SSD!) since to be checking on this hobby thread. You probably figured things out by now :( These servers were purpose built. I.e., this particular video's build was for office workers to do some medium duty marketing videos on the gaming GPU, while the workstation GPU card was for heavy video & computational work. I have this popular build for general purpose video & computing: "Riser 1" = {Quadro K2000 , 1TB M.2 NVMe SSD (HUMUNGOUS $L4) ~2GB R/W caching, HBA controller} ; "Riser 2" = {Quadro K2000, Quadro K2000}.
@ThomasTomchak 5 years ago
What else can you tell me about the solderless blade contact that you mentioned at the very end of the video? I’m very curious in checking that out, but not exactly sure what I’m looking for. Thanks. Enjoy the video.
@PeteDiful 5 years ago
Thanks for the feedback, Thomas! I now use blade connectors that you crimp to wires. After the connector is crimped to the wire, I also crimp the flat blade to give it some bulk and ripple spring so that the blades can be firmly wedged into the same rails I would otherwise have soldered to. Make sure there's a tight wedged fit so the connector won't come loose.
@r.paulnobrega3384 5 years ago
Subscribed... Very interested in the video on this. Amazon links to supplies would be awesome.
@PeteDiful 4 years ago
@@r.paulnobrega3384Thanks, R.Paul! Only need 6 spade connectors. This link goes to 10pairs, cheapest I can find on Amazon: www.amazon.com/16-14AWG-Electrical-Connector-Insulated-FDFD2-250/dp/B07MKCN1W3/ref=sr_1_2?keywords=spade+crimp+connectors&qid=1567180979&s=sporting-goods&sr=1-2 I personal use ones with heat shrink sleaves: www.amazon.com/Mercury_Group-Connectors-Terminals_10Pcs-Waterproof-Terminators/dp/B07RRGY8SN/ref=sr_1_5?keywords=spade+crimp+connectors&qid=1567180979&s=sporting-goods&sr=1-5 Trunk PCIe GPU Power cable. I cut in half & just use one end. Other end is spare for another build. Sometimes out of stock. www.amazon.com/gp/product/B078WNKT4F/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 PCIe GPU Power cable splitter. Can work as trunk but has shorter wires than one above. www.amazon.com/gp/product/B07DP3Z53L/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 I'm working on new solderless connection video. Hopefully video will be available soon, as time permits. There are a few variations to the Dell PSU contacts that I want to address in video.
@oceanyt8 3 years ago
Hi, thanks for the video. Can I ask, is it worth investing in and converting one into a gaming server? I have an old unit, but does it perform better than a normal desktop? I'm using a Dell OptiPlex 7020 currently. Any big difference when it comes to gaming performance or overall usage?
@Jabe_VeX 1 year ago
If you're using an OptiPlex right now, that's completely different from what's shown in the video. What's shown here is a server with 2 server processors and ECC memory, and it supports up to 384GB of RAM. I've looked up the specs for your OptiPlex 7020 and it's only consumer hardware with support for up to 16GB of RAM. There's probably nothing wrong with whacking a cheap GPU into your OptiPlex to make a good but cheap gaming PC (I recommend the 3050 or 6400), but don't mix up this server and your OptiPlex; they have radically different hardware.
@athanastrein6347 4 years ago
Hi Pete! Thanks for the extremely informative video. Question: I just replaced an RX580 Sapphire Pulse from my main PC and would like to adapt it into my R710. Managed to tightly fit it inside Riser 2 and currently looking into pulling the power supply according to your video. I have an 8pin male to 6+2pin male cable lying around which came with my main rigs' PSU and I'm wondering if I could use that instead? Or would that be incompatible considering the R710 pins are 6?
@PeteDiful 4 years ago
Thanks for the feedback, Athan! (Cool name by the way.) Yes, the 8-pin male to 6+2-pin male cable will work. Either end can plug into the RX580. The +2 lines are connected to ground. I have a setup similar to yours with an RX580 & a Quadro and use the same technique as in the video. If your cable reach is just long enough and you don't have an 8-line extension cable like I do (as in the video), then one end of the connector will need to be sacrificed, preferably the 8-pin end, to leave the 6+2 end available for flexibility in using it on a 6-pin GPU in the future. Good luck, and let me know how it goes!
@athanastrein6347 4 years ago
@@PeteDiful Hey, greatly appreciate the prompt reply! So on the PSU end I'm left with 5 ground cables for 3 "grounding pins" on the motherboard. Do I leave 2 dangling or do I just solder all 5 to ground (i.e. 1/2/2)?
@PeteDiful 4 years ago
@@athanastrein6347 The +2 wires can be ganged with any of the other 3 ground wires to make a total of 5 ground lines. So 5 ground lines & 3 12-volt power lines. I believe some GPUs use the +2 ground lines to sense that external PCIe power is present.
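To make the wiring in this exchange easier to follow, here is a small sketch of the 6+2 connector layout being described: three 12V lines and five grounds, with the two "+2" grounds doubling as presence/sense pins. It is only an illustration of the counts discussed above, not a verified pinout; check your own cable with a multimeter before splicing anything.

```python
# Wire counts for the PCIe 8-pin (6+2) GPU power connector described above.
# Assumption: the common 6+2 layout; verify the actual pinout on your cable.

PCIE_8PIN = {
    "+12V": 3,        # three 12 V supply lines
    "GND": 3,         # three primary ground returns
    "GND/sense": 2,   # the "+2" lines: grounds the GPU reads as sense pins
}

assert sum(PCIE_8PIN.values()) == 8
grounds = PCIE_8PIN["GND"] + PCIE_8PIN["GND/sense"]
print(f"{PCIE_8PIN['+12V']} x 12V lines, {grounds} x ground lines")  # 3 and 5
```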
@athanastrein6347 4 years ago
@@PeteDiful So! Went with some male spade connectors instead of soldering and somewhat kinked them so it's easier to wedge as you recommended in another comment. Works like a charm! paste.pics/8YE1N Had to saw off part of riser 2 and remove the plastic "cover" from riser 1 in order to get a hint of tension from the components as the 580 was a tight fit but everything worked eventually. Thanks for your help!
@PeteDiful 4 years ago
@@athanastrein6347 Exactly! Thanks for the feedback. So happy for you that you got everything plugged in and working.
@CBRHerms 4 years ago
Thanks for this. Very useful info. Modified mine (soldered to PSU rails rather than use terminals), though did not check GPU clearances of my spare GTX 960 strix I had lying around. What have other people been putting in theirs? Just need an idea of what will fit. Planning on passing it to a VM for use with Plex (with Nvidia limits unlocked).
@PeteDiful 4 years ago
Thanks, Adam! Sorry about the Strix not fitting. I had a similar problem once too. Pretty much any GPU that doesn't have those fancy or wide shrouds/fans would work. Check out the EVGA GTX 970 dimensions. On a few GPUs that were a bit too wide, I either slightly trimmed the shroud or removed the riser's metal fencing to make room. Sometimes there is a way to change the GPU and/or server a little to make things work. But I admit, server boxes have very tight, exacting dimensions that you'll need to seriously consider when fitting anything inside, unlike PC towers that can fit a football inside.
@CBRHerms 4 years ago
@@PeteDiful indeed. I ended up buying a longer, thinner gtx 1070 that fitted once I chopped off the metal fencing. Been running a couple of months with no issues.
@PeteDiful 4 years ago
Bravo, Adam! Really nice to hear this. My servers been running 24/7 for about a year now without interruption except for the one PG&E shutdown on 10/27/2019. I had servers on UPS with UPS app running. So servers gracefully went to sleep during the few hours shutdown and gracefully powered back when power was restored. Yesterday, I finally upgraded unRAID 6.6.6 to 6.8.2 and had to reboot the servers for the first time in nearly a year of flawless operation.
@jj-icejoe6642 4 years ago
PeteDiful Electricity bill ?
@brucetraudt1571 2 years ago
Great job, but how does it perform if you have users under a VM??
@thegorn 4 years ago
Oh dang you dun melted your server!
@RoyalShak89 3 years ago
Hey PeteDiful, thanks for the video! I've got two R710 servers and an HP ProLiant DL380p at my home lab and I can't make it work with even 1 GPU [Nvidia 1650S and Nvidia Quadro P2200]. In the BIOS of those servers, the PCIe slot shows "Nvidia Corp VGA adapter" or "Unknown PCI". What should I do? I am also using an external PSU for the 1650S... nothing works.
@RoyalShak89 3 years ago
Picture of one of the configurations I made: " cdn.discordapp.com/attachments/793579395667197962/820420699452866560/unknown.png "
@csgrullon 4 years ago
Would be nice to see a macos vm running on this server
@ThomasBeringerMichael 4 years ago
I would love to hear about your solderless method to tap the power supply rails. If you could drop me a line about how you did this I would Greatly appreciate it!!!
@PeteDiful 4 years ago
Thanks for watching and commenting, TB! Please check out my replies to "Antonius" or "Thomas Tomchak" about solderless connections - it's pretty easy. But, man, I've been swamped lately with work and really wish I can get a video out on how to do this just to inspire others like you. In the meantime, probably can put a link up to some pix. Good luck!
@Germantwinov 4 years ago
Hi! Thanks for sharing! R910 + half length Nvidia GeForce GTX 1660 Ti looks promising. But why X5660 not 4870?
@PeteDiful 4 years ago
Thanks, Pavel! You mean 10core/20thread Xeon E7-4870? By all means, yes, if your power, IOPS, application needs are met. The 1660 Ti would add so much more compute power to the lucky user/VM that gets assigned.
@seaham3d695 2 years ago
I have one of these right next to me, and it's getting MODDED!!!! (edit) It's been modded and has a Radeon in it. Works fine; just disable the default display in the BIOS and it will work.
@iTzStick 3 years ago
2:12 I've been searching for those cables forever but I can't find them anywhere. Is there another solution like 6-pin to 2x 8-pin, or are the cables not powerful enough?
@PeteDiful 3 years ago
If handy with wires, just buy the needed connectors and make your own cables. Cable wires should be 14gauge or thicker.
@Pocokcic 6 months ago
Nice video. I am wondering now what CPU could be better than the X5670. Obviously the X5680 or X5690, but those are overpriced. Maybe the W3670, if the motherboard is compatible.
@PeteDiful 5 months ago
Yes, for faster processing power on the R710 (11th gen) platform, which uses the LGA1366 CPU sockets, 6 Core X5680 (12M, 3.33 GHz) and 6 Core X5690 (12M, 3.46 GHz) would do the trick. The w3670 is spec'd to use the FCLGA1366 socket, which is the same socket as LGA1366. I don't have any experience with w3670, but would say that it should work in the R710 LGA1366 socket. Last I checked, the X5600 series 6 Core CPUs are far cheaper than the w3670. Unless you can get w3670 cheaper than the X5600 series, the X5600 series are better on the wallet, yet still have darn good processing power for the buck, the 2 main points of this exercise. I'm sure many on this thread would love to know how the w3670 goes with the R710. If you do go with the w3670, please let us know how it went. Thanks! R710 Technical Guide: i.dell.com/sites/doccontent/business/solutions/engineering-docs/en/Documents/server-poweredge-r710-tech-guidebook.pdf
@benstyles8494 5 years ago
Long way around to do a simple job. Could have just used the factory option... Dell R5500. Quieter, boots faster and supports full sleep mode along with correct slots and power.
@PeteDiful 4 years ago
Absolutely. If you already have something built like the R5500, then there's absolutely no need to modify. Only need to solve the problem at hand.
@Victor-E- 4 years ago
Very interesting mod. How many pcie lanes does the r710 support? What about getting 4 GPUs installed, 2 inside and two using riser ribbons running outside the case? If there are enough lanes, would this be doable?
@PeteDiful 4 years ago
Yes, 4 GPUs installed internally VERY doable. Riser1 has 3 PCIe 2.0 x4 slots. Riser1 configuration: (top) Quadro K4000/K2000/K2200; (mid) Quadro K4000/K2000/K2200; (bot) HBA drive controller for unRAID drives. Riser2 has 2 PCIe 2.0 x8 slots. Riser2 configuration: (top) Quadro K4000/K2000/K2200; (bot) Quadro K4000/K2000/K2200. Basically can install 4 single slot GPUs (without ribbon adapters), which is what the Quadro's in example are. Fits perfectly & still good air flow. I have 2GB/s NVMe SSD in Riser1 mid slot. Losing this lightening speed cache drive would be the downside to having 4 GPUs though. Also GPUs in Riser1 would be limited to PCIe 2.0 x4 IO speeds. If your application isn't IO speed dependent and more GPU compute intensive, then Riser1 GPUs work fine as in my case. The best configuration for me would probably be 3 fast Quadro GPUs & NVMe SSD cache drive. I've done 3 GPUs already with 3 VMs simultaneously pegging away at its dedicated GPU. Charming, cute beast and reasonably affordable. I don't have any games but was asked to try. I'm guessing not good for graphics games because they're IO intensive and the PCIe x4 slots would be the bottleneck.
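For readers weighing which riser to populate, the per-slot numbers quoted in this thread follow from simple lane math. A minimal sketch, assuming PCIe 2.0's nominal ~500 MB/s per lane per direction; the slot layout is taken from the comment above.

```python
# Per-slot bandwidth estimate for the R710 risers described above.
# Assumption: PCIe 2.0 ~ 500 MB/s per lane per direction (5 GT/s, 8b/10b).

PCIE2_MBPS_PER_LANE = 500

# Slot layout from the comment: Riser 1 = 3x PCIe 2.0 x4, Riser 2 = 2x PCIe 2.0 x8
RISERS = {"Riser 1": [4, 4, 4], "Riser 2": [8, 8]}

for riser, widths in RISERS.items():
    for slot, width in enumerate(widths, start=1):
        gb_s = width * PCIE2_MBPS_PER_LANE / 1000
        print(f"{riser} slot {slot}: x{width} -> ~{gb_s:.0f} GB/s per direction")
```

That gives roughly 2 GB/s for each Riser 1 slot and 4 GB/s for each Riser 2 slot, matching the figures quoted elsewhere in this thread.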
@Victor-E- 4 years ago
PeteDiful i snagged an r720 last week for under $200. The pci slots are full length... and there are 7 of them ;)
@Victor-E- 4 years ago
Can confirm that the small riser (slots 1-3) will work fine with a PCIe extender cable alongside an internally mounted card. And those are x8 on the 720 :-) Windows can see both GTX cards and both are utilised for CUDA compute over the network. However, I got a hard fail when trying an RTX 2070: the 720 halted on boot with a 'reseat card in PCI slot 4' (which is the main internal one). Need to figure out the issue there.
@PeteDiful 4 years ago
Sweet deal. Would love to get my hands on R720.
@anthonystrohmayer9191 3 years ago
I have decided to like your video. It's half doable for me. I can melt my pcie slots for my video cards but I'm not going to solder power wires to my power supplies. I can just as easily use an external power supply and route the power wires out the back of the server. It appears that mining power supplies power up independently of the motherboard because they have an on/off switch on them.
@PeteDiful 3 years ago
Mining will melt PCIe slots along with everything plastic :)
@TransRightsMatter 3 years ago
I want to do the same for VR and gaming. Since this is a hack and this video is 1 year, 9 months and 1 week old, did you notice any performance issues from running the card at x8 vs x16 during gaming?
@PeteDiful 3 years ago
Thanks for comment, Alicia! Not really a hack because GPUs, CPUs, server motherboard are fully compatible with & compliant to same PCIe standards. It's a matter of understanding standards & technology and taking advantage of available platforms. It's common practice for OEMs to exercise "artistic license" toward the physical shape of their products to exclude competition and promote "upgrades". This is the case with my servers. I could pay a few hundred/thousand $$$ more to get this "hack" upgrade from Dell. PCIe x8 has half the data rate of PCIe x16 no matter which platform or application is run. As for the GPUs in my servers, they're running fine ever since with no performance issues nor degradation except that I'm getting half performance of PCIe x16. These servers are purpose built for clients and suit just fine - running mostly video/audio editing tools, sorry no games :( Servers have been running 24/7 since with occasional power downs during California's brown-outs :) If you want full performance of PCIe x16 GPU, it's best to have MOTHERBOARD that fully supports GPU's PCIe performance level. If the program/game you're running does most of its computing (i.e., >98%) in the GPU, then any CPU that is compatible with the motherboard will suit just fine.
@SpookyDOTexe 4 years ago
I'm trying to mod my T610 to take x16 GPUs. I don't have the correct tools, however. Would a tiny Dremel work if I can borrow one somewhere? Or do I need a soldering iron as you demonstrated in the video?
@PeteDiful 4 years ago
Thanks for watching and commenting, SpookyDOTexe! Awesome T610! I'm jealous. It's your choice whether to Dremel, melt, or route. My approach is to use whatever I have available and am most comfortable using. I have all three methods available and use my router as seen in my other video. But if I didn't have a router and only had the other 2, my preference would be a soldering iron because it's forgiving and typically has better access than the Dremel. Note that your x16 GPU will work in the T610 but will be limited by the T610's x8 bus.
@SpookyDOTexe 4 years ago
@@PeteDiful I know, they won't lose that much performance though, max 5-10% at worst.
@PeteDiful 4 years ago
@@SpookyDOTexe Yeah. All builds should start with purpose in mind. If X16 in X8 satisfies the purpose in mind, then go for it!
@SpookyDOTexe 4 years ago
PeteDiful it’s gonna be a gaming-capable gameserver machine.
@SpookyDOTexe 4 years ago
@@PeteDiful I did it! I now have a GTX 650 Ti Boost in mine ;D I modded the slot in the stupidest way ever. KNIFE & TWEEZERS AHAHAHAH but I was really careful. It worked out!
@shaunwhiteley3544 5 years ago
Don’t suppose you have any tips on reducing fan noise? I don’t have a heavy load on my R710 so I was thinking maybe reduce fan speed? Any suggestions please? Cheers
@PeteDiful 4 years ago
Absolutely. For the time being, if it's a sitting server and not in a rack, then prop it on top of dense foam and add noise-dampening material at the rear & front, being careful not to obstruct airflow. Fortunately, the R710 is reasonably quiet and has excellent cooling fans to begin with. Aside from swapping out to better-performing fans, I'm not a fan of reducing fan speed unnecessarily. Doing so may create more heat (e.g., adding inline resistors to reduce fan voltage will heat up the resistors instead) and accelerate failure of electronics that need the cooling. Better soundproofing enclosures may work, but I believe this may not work for all scenarios and can be quite expensive, defeating our original goal of keeping costs low. There isn't much that we can do to adjust the efficiency of cooling electronics to reduce fan noise. For now, I'll appreciate the fans for the benefits they provide and rather try to find a "simple" and affordable solution to complement them. I'm working on...
@PeteDiful 4 years ago
@Shaun: All my circuit parts have now arrived today! I've ordered extra parts given the ~2wks ordering time in case I need them. Total cost was under $4 & about the size of small postage stamp except for speaker, which is about 2" diameter. Hope to have some time this weekend to assemble & test. If successful at noticeably reducing server sound, will give you my extra circuit just because you were nice enough to question. Pretty excited to have these affordable beasty servers running my NAS & VMs at such low power. Server is already quiet in my opinion. But adding "silencers" will make these beasts so unreal w/o sacrificing cooling performance. I'll more likely add these silencers to my other noisy equipment.
@shaunwhiteley3544 4 years ago
PeteDiful Hi, thanks for the replies, sorry I missed the first one. I have sort of got around the problem. I had to move it into my garage as I could hear the server running at night from downstairs. She who must be obeyed was not impressed! 😣. We then had some very hot weather in 🇬🇧. Server was then running even louder with the heat. It’s now in my pit in the garage, so a bit cooler but still noisy. At least I don’t hear it in the house now. It’s on 24/7 with unraid , Home Assistant and 7 Poe cameras, Zoneminder and a few vm’s. I love it but it’s pulling about 250 plus watts. Expensive to run so I may have to look at other solutions 😢. Cheers Please keep us/me 😀 updated with your experiments 😀
@jameshardy2039 4 years ago
@@shaunwhiteley3544 I had this issue with my R710. When I first got it, it was very quiet; then after multiple OS reinstalls the fans started running a lot faster. After fretting about this for several days I read a post and fixed it by doing what's called a flea power drain. Power the server off, unplug both PSUs from wall power, then hold down the front power-on button for 35 seconds. Fixed! If this does not work then try updating the iDRAC and BIOS firmware. Cheers.
@CriticoolHit 3 years ago
@@PeteDiful I think the real issue here is how the iDRAC doesn't recognize the cards and sets fan speed to 100%, which is technically so loud it's above safe dB levels. I am wondering why yours didn't do this. Did you send those direct commands to your iDRAC forcing the fan speed, or disable that detection feature? This is currently what's keeping me from doing this.
@drkcodeman 4 years ago
Were there any BIOS settings you needed to change to allow a dedicated card to work?
@PeteDiful 4 years ago
No specific BIOS change for the HW/PCIe. The only BIOS change was from UEFI boot mode to BIOS boot mode so that I can selectively boot from USB with unRAID OS. I followed SpaceInvader One's video tutorials on setting up PCIe passthrough to VMs so that each VM can get dedicated control of the HW. Then I remote VNC to my VM (running Win10) and install the appropriate nVIDIA driver. Reboot my VM (not the server) and remote VNC in to see beautiful HD graphics remotely! I haven't played any video games on the server yet to be able to report on the gaming experience. I do video editing and do see better video rendering performance than on my quad-core workstation of comparable CPU speed. I think theoretically, on a pristine VM with a dedicated GPU & Xeon cores & a high-performance server MB, you should get the full attention of the HW. The only performance penalty that I see is the remote network connection & VNC graphics syncing between the local and remote machines. On a general-purpose desktop/laptop, you don't have the additional layer of remote control, but I would think that overall performance is penalized by the CPU having to deal with OS bloat, background services, etc.
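For anyone following the same unRAID passthrough route, the usual first check before assigning a GPU to a VM is that the card shows up in its own IOMMU group once VT-d is enabled. The sketch below is a generic Linux check, not something from the video; the path is the standard sysfs location and it assumes you run it on the host itself.

```python
# List IOMMU groups on a Linux host (e.g., unRAID) before PCIe passthrough.
# Assumption: VT-d is enabled in the BIOS and the kernel exposes
# /sys/kernel/iommu_groups; an empty listing means passthrough will not work.
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")

def iommu_groups() -> dict:
    """Map each IOMMU group number to the PCI addresses it contains."""
    groups = {}
    for dev in sorted(IOMMU_ROOT.glob("*/devices/*")):
        groups.setdefault(dev.parts[-3], []).append(dev.name)
    return groups

if __name__ == "__main__":
    groups = iommu_groups()
    if not groups:
        print("No IOMMU groups found - check that VT-d is enabled in the BIOS.")
    for num, devices in sorted(groups.items(), key=lambda kv: int(kv[0])):
        print(f"group {num}: {', '.join(devices)}")
```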
@Appri 2 years ago
Hi Pete, there's like less than a 3% chance you are still monitoring comments on this video, but I'm just getting back to even more r710 modding - going to install onboard SSDs, and get back to installing a gpu :)
@comm2005 3 years ago
Pete, thank you very much. Can this be done on an R910?
@PeteDiful 3 years ago
Sorry for delayed response. Google moved my RU-vid feedback comments to "Social" mailbox, which I never check; and I've been busy with other projects (69GByte network storage transfer to SSD!) since to be checking on this hobby thread. You probably figured things out by now :( R910 - nice. Absolutely! Similar process on notching if need be & GPU power tapping.
@drkcodeman 4 years ago
I've installed the card and installed the drivers through iDRAC, and according to iDRAC it's functioning, but for some reason it's not putting a display out to my monitor.
@PeteDiful 4 years ago
That's good news! CPU recognizes PCIe GPU. Assuming signalling to PCIe GPU data pins are good, it looks like it's a matter of telling OS to take ownership of GPU. You may need to use iDRAC or built-in VGA to do all your driver installs on low resolution. I know that nVidia driver installs take quite a bit of time (~30minutes) with some long moments (~10minutes) the install looks like it's hanging but really is downloading & unpacking huge install packages. Be patient. If install successful, then we know for sure that OS took ownership of PCIe GPU. After successful driver install, reboot with monitor connected to PCIe GPU. Good luck!
@drkcodeman 4 years ago
@@PeteDiful The driver is installed and there's still nothing. Like I said, I'm going to try another cable.
@343ian 4 years ago
@@PeteDiful I'm having some trouble with the driver for the K2200. I got two of them and put them both in the x8 slots, but on rebooting, the computer gets stuck in startup.
@iankoratsky1181 4 years ago
If I get the x16 riser card, can I put the x8-slot riser card where the x4-slot card used to be? Or is the x4 the only one that can go there?
@iankoratsky1181 4 years ago
Also, could I just plug in 3x single-slot cards? Do I have to use a double slot on the x8?
@PeteDiful 3 years ago
Sorry for delayed response. Google moved my RU-vid feedback comments to "Social" mailbox, which I never check; and I've been busy with other projects (69GByte network storage transfer to SSD!) since to be checking on this hobby thread. You probably figured things out by now :( You may use x16 riser card in the x8 slot ("Riser 2" slot). The x8 riser card may be used in the x4 slot ("Riser 1" slot) but will be under-utilized with just 2 x4 slots. The x4 riser card has bifurcation to utilize 3 x4 slots - basically you get one more x4 slot than you would get from the x8 riser card.
@PeteDiful 3 years ago
Yes, you may plug in 3x single slot cards. The x4 "Riser 1" has 3 x4 slots. The x8 "Riser 2" has 2 x8 slots. You have a total of 5 PCIe slots that you should try to maximize their total bandwidth however you choose. For example, I have this configuration on several servers: "Riser 1" = {Quadro K2000 , 1TB M.2 NVMe SSD (HUMUNGOUS $L4), HBA controller} ; "Riser 2" = {Quadro K2000, Quadro K2000}.
@TheJPomp 1 year ago
Came across this looking for GPUs for my R710. Neat project. Is it still running in this config?
@PeteDiful 5 months ago
Yep, still running these beasty servers! One server still takes care of my NextCloud server, 2 VMs for video processing, and multiple web hosting sites. The farm has grown, and I donated the same setup (with spade power connectors instead of soldering) to SpaceInvader One back in 2020. No complaints yet.
@drkcodeman 4 years ago
1st: they sell x16 riser cards. 2nd: how does this bypass the 25-watt TDP that the express slot can handle? 3rd: you can avoid all soldering by buying a cheap gold-rated HP server PSU and a breakout board.
@PeteDiful 4 years ago
Thanks for feedback, Cody! Yes, Dell has x16 Riser card Part #GP347. I have these. It only has 1 PCIe x16 slot & at top position, which will not have room for a dual slot GPU but will take just 1 single slot GPU. These riser cards are also relatively expensive, maybe $40 for a good deal. I find notching the x8 sockets free, easy and doesn't lose anything on GPU performance. 25W TDP is overcome by internally tapping PSU as shown in video. Thanks for gold HP server PSU tip. Wouldn't that require it to be an external power supply that needs to be switched on each time main server with GPU is turned on? ...& switched off when server not used?
@drkcodeman 4 years ago
@@PeteDiful Working on this right now: i.imgur.com/sW5SUtp.jpg I already replaced the crappy RAID controller with an H200 and I'm adding SSDs. I just need to find a GPU that will work; I am debating getting the x16 slot and modifying the case to make it work. All of this with an HD 6950, and I get no video through HDMI. Looking into other graphics cards.
@drkcodeman 4 years ago
Think I will do a high-end single-slot Quadro for the top slot on the x16.
@drkcodeman 4 years ago
@@PeteDiful Dropped ~$100 for the x16 slot (I think I broke the x8 riser card from soldering) and 6 hard drive sleds for SSDs.
@PeteDiful 4 years ago
@Cody: Thanks for the image share. Nice grab on that deal! You're gonna have a lot of fun! I've done quite a bit of research into high-performance, low-power GPUs. I'd suggest getting a Quadro K2200 (note the 'K' in front of "2200") for the best bang for the buck without needing to modify the case or PSU. Should be plug & play for your setup. For even more performance, get a Quadro K4200, but you will need to tap PCIe GPU power as shown in the video. Both GPUs are single slot & will work nicely, as I have these setups too. Read the other reply threads on how you can do PCIe GPU power tapping without soldering. Good luck!
@andrewperry1987 4 years ago
Do you have a link to the solder-less blade contacts that you used?
@PeteDiful 4 years ago
Thanks for watching and commenting, AP! Please check out my replies to "Antonius" or "Thomas Tomchak" about solderless connections - it's pretty easy. But, man, I've been swamped lately with work and really wish I can get a video out on how to do this just to inspire others like you. In the meantime, probably can put a link up to some pix. Good luck!
@madmax43v3r 4 years ago
Can you use the x16 slots on the MoBo directly? Using a x16 riser cable?
@PeteDiful 4 years ago
I'm not sure, Hugo. It seems like MoBo x16 slot (for Riser2) can be used directly (like with x16 riser cable) as Riser2 card is just a simple card with PCIe socket and no components. I would venture to say that the Riser2 PCIe socket is directly mapped 1-to-1 to MoBo x16 slot. So a x16 riser cable should be the same 1-to-1 mapping to MoBo x16 slot. Keep in mind that R710 MoBo PCIe slot 2 is wired x8 only, so you won't get any x16 functionality. MoBo PCIe slot 1 is a little special because it seems to be bifurcated 3x4 with 1st bifurcation dedicated to a Dell x4 storage controller card. So, no for MoBo slot 1.
@downthegardenpath 4 years ago
@@PeteDiful The R710 won't boot without the riser in place unfortunately
@aledantas2k10 4 years ago
I thought you would have issues running 1 workstation and 1 gaming GFX card from the same brand, since both use different drivers...
@PeteDiful 4 years ago
The implementation is on virtualized hardware & software. Fancy words, but this is pretty much what all modern computing is about: a bunch of HW and SW and one master hypervisor (unRAID OS) that orchestrates everything to run concurrently as if each virtual machine is running on its own HW, which it pretty much is except for shared resources like storage, network, maybe some IO peripherals, and the physical server enclosure/case. Lots of fun!
@nettyvoyager6336 4 years ago
I've seen some real nice R710s. My m8 keeps saying I need an R70, then he also told me that an R720 is better than an R810. Is this correct?
@PeteDiful 4 years ago
"Better" depends on what metrics you're comparing. Assuming potential compute capabilities, then, R810 (2or4x8core or 64core equivalent, DDR3-1333, PCIe 2.0) is better than R720 (2x10core or 40core equivalent, DDR3-1600, PCIe 3.0) which is better than R710 (2x6core or 24core equivalent, DDR3-1333, PCIe 2.0). But then R720 betters both R710/R810 with faster DDR-1666 and PCIe 3.0. Are you compute intensive, memory intensive or PCIe/peripheral intensive? Or combination of these? Choose which best suits your needs. Depending on purpose/need of server, one may be able to find best configuration on a particular platform. I'm running remote 2x high-end video editing workstations, cloud storage, 2x Windows office VMs, 4 lite dynamic website hostings, various Linux micro services on a single R710 platform. Performance is good enough and cost efficient compared to 4 or 5 basic i7 computers lying around. I can get more video compute performance on a R720 due to PCIe 3.0 GPUs vs 2.0 & 4GB/s NVMe transfers vs 2GB/s for about $2000 more. If you don't need the GPUs, which is outside of this discussion by the way, then R810 packs the pure CPU compute power of potentially 64cores but bottlenecked at DDR3 mem access - good for everything except for video editing or heavy IO traffic. As long as each of the 64 CPUs can do its job within the CPU subsystem (mostly stay off of main bus) then R810 is a monster computer. Hope that helps.
@anathma 4 years ago
@@PeteDiful One thing to keep in mind is the maxed out R720 will be actually close to the same CPU speeds as the maxed out R810 because it is the newer gens of CPU. And if you pay for power at ALL, the R810 is going to cost you a crap ton more. The 720 also has cables and shit you can buy for the GPU and actually supports them so you dont need to solder etc.
@PeteDiful 4 years ago
@@anathma Absolutely, anethema. I'd like to also emphasize that everything should be purpose-built. As the title of this video indicates, focus is on purposeful performance under $900. The $900 price tag is actually the high end range for the build parts as I would rather have people find realistic market prices for their components instead of come up empty handed when shopping for ridiculously low phantom prices; though, it is very possible to build my build for under $600 if shop hard enough. I suppose if one's purpose is to build the fastest R720 server with sufficient memory & IO support to fully utilize the compute resources, then one would need to have budget of maybe $10k? Of course, such a system would likely be cheaper on power than 10x R710 :) Maybe? My conclusion is that the R710 build platform has the best price/performance/ease of ALL other current servers when it comes to general purpose applications as mentioned in this video. R710 & 2x Xeon 12core combinations still pack a wallop for most compute intensive applications these days.
@graydenfisher5704 2 years ago
Hello, thanks for the info. I just picked up one of these R710s, my first physical server not in a farm far away. I have a Tesla K40m I would like to do this mod with, but I'm concerned that the BIOS needs above-4G decoding available, and that option isn't available anywhere I've seen in the many, many config areas of this server, whether BIOS, system services, or iDRAC... Anybody know if a card that normally requires "above 4G decoding" could work with this sort of mod? My other card does happen to be a 970, but I really want to get my Tesla working in this server if any possibility exists, as I already got headless Windows Parsec working with it on a different setup (see Craft Computing's video on any headless GPU with Proxmox).
@TrevorHaagsma 1 month ago
Did you get it working?
@graydini 1 month ago
@@TrevorHaagsma The K40m doesn't work because of the required 4G decoding. I got a 1070 8GB that worked; I just used an external power supply. There was a mod to flash the Tesla BIOS to change it to a K40c instead of a K40m, because that's the consumer version that doesn't require 4G, but I bricked my card trying it. Now those things are dirt cheap, so it may be worth a try again if you can find the forum post. It also requires a lot of physical modification, cutting cages and stuff like that.
@sathishiyer5371 5 months ago
What virtualization software are you using? VMware ESXi 6.7? Thanks. Cheers. Great MOD. Love it.
@PeteDiful 5 months ago
unRAID OS has a built-in virtualization hypervisor that uses KVM. If you wish, you can run ESXi instead of unRAID on this setup. The R710 setup that I'm using supports Intel VT-x and IOMMU, which are the hardware features needed to virtualize hardware. Here are considerations and requirements to run VMs: docs.unraid.net/unraid-os/manual/vm-management/#:~:text=To%20create%20virtual%20machines%20on,d%20or%20AMD%2DVi).
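A quick way to confirm the VT-x half of those requirements from a running Linux host (such as unRAID) is to look for the vmx flag in /proc/cpuinfo. This is a generic check, not something shown in the video, and it does not verify VT-d/IOMMU, which still has to be enabled in the R710 BIOS.

```python
# Check whether the host CPU advertises Intel VT-x, a prerequisite for KVM VMs.
# Assumption: run on the Linux host itself (e.g., unRAID), not inside a VM.

def has_vt_x(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        return any(line.startswith("flags") and " vmx" in line for line in f)

if __name__ == "__main__":
    if has_vt_x():
        print("VT-x present: KVM hardware virtualization is available.")
    else:
        print("No vmx flag: enable Virtualization Technology in the BIOS.")
```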
@birdmam87 4 months ago
@@PeteDiful 0:31 0:31
@birdmam87 4 months ago
Because
@Silastar31 5 years ago
Hello! Great tutorial video. I will probably do that on my R710 soon! Just to be on the safe side: I'm afraid of soldering anything to the power connector, so your recent development is also interesting to me. What do you mean by blade connectors?
@PeteDiful
@PeteDiful 5 лет назад
Thanks, Arnaud! Solder-less PCIe GPU cable connection video coming soon. Stay tuned...
@PeteDiful
@PeteDiful 4 года назад
Hi Arnaud. Not sure if you have done the solderless connections yet. See the "Thomas Tomchak" comment thread for the method & supply list, including blade (spade) connectors. Sorry, I've been super tied up for the past months with emergencies and haven't been able to put together the other technical videos I had hoped to. :( Good luck!
@thehen101
@thehen101 4 года назад
@@PeteDiful Any chance you could simply take a picture of the connections? I saw the links in the other comment, but I'm not sure how I should plug/attach them when they arrive. Thanks for the great video, too.
@Appri
@Appri 4 года назад
Hey Pete! I'm a fellow PowerEdge R710 modder and I need some help. My goal is to get this server to VR-ready status. Currently I'm running dual Xeon X5690s, along with a GeForce GTX 1050 (a quick mod without extra power) and a 4-port USB 3.0 PCIe card. The Rift is detected and setup works perfectly, except the headset itself doesn't turn on. Any ideas? I could use some help. The Oculus outputs audio, but no display, and shows an orange status light.
@Appri
@Appri 4 года назад
Also, the 970 takes up two slots? I thought it only took up one.
@peted7312
@peted7312 4 года назад
Cool beast, Mayhem! It's going to be a fun and powerful system. I don't know anything about the Rift or the Oculus, but I'd love to help on the hardware & virtualization side as best I can. Tell me a little about the Rift & Oculus hardware connection interface. What USB card are you using? The GTX 1050 is double slot and does need extra PCIe GPU power to function properly, so keep it happy with loving power. You'll also need to match its FW version with the SW you're running; different SW is compiled against different FW development versions.
@Appri
@Appri 4 года назад
@@peted7312 IT WORKED. FOR A WHILE. I bought a new 970 to pair with the 1050 and had it inside with the HDMI run out, along with the 4-port USB 3.0 card. I'm using double-slot cards and PCIe risers. I think I fried the 970, though; I accidentally sparked it and it stopped working :(. Luckily, I have a spare 1660 that I can try, which seems to work. If it does still work, I'll hopefully go purchase a 2080 Ti and see if that works, haha!
@Appri
@Appri 4 года назад
Pete, do you have an idea of how not to fry a GPU this way? :( Turns out I ran outta money, haha!
@peted7312
@peted7312 4 года назад
@@Appri Good to hear! Are you running gaming with 2 VMs? Sorry about the 970 getting fried. Hopefully it wasn't a big investment, and it was a good learning experience for future mods. The 2080 Ti is an ultra-high-performance GPU that I don't think the R710's stock PCIe 2.0 x8 risers can fully utilize (max ~4GB/s on Riser 2; ~2GB/s on Riser 1). I'll have to research this more to see whether the R710 can be leveraged to fully utilize PCIe 3.0 x16. Off the top of my head, I believe the R710 can be leveraged to fully utilize PCIe 3.0 x8 (max ~8GB/s), which should be good enough. GOOD LUCK!
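For reference, here is the rough math behind the riser numbers thrown around in this thread; a sketch of the theoretical per-direction figures, not benchmarks of this build.

    # Theoretical PCIe throughput: per-lane rate = transfer rate * encoding efficiency.
    GEN = {
        "2.0": (5.0, 8 / 10),     # 5 GT/s per lane, 8b/10b encoding
        "3.0": (8.0, 128 / 130),  # 8 GT/s per lane, 128b/130b encoding
    }

    def throughput_gb_s(gen, lanes):
        gt_s, eff = GEN[gen]
        return gt_s * eff / 8 * lanes  # bits -> bytes, then scale by lane count

    for gen, lanes in [("2.0", 4), ("2.0", 8), ("3.0", 8), ("3.0", 16)]:
        print(f"PCIe {gen} x{lanes}: ~{throughput_gb_s(gen, lanes):.1f} GB/s")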
@drkcodeman
@drkcodeman 4 года назад
Update: I tried a K4000 GPU with an external power supply and a x16 riser card, and I get no video on the server.
@PeteDiful
@PeteDiful 4 года назад
Start by testing the x16 riser with a simple GPU so you know it can handle other GPUs. Second, test the K4000 on another known-good system so you know it should work in the server. Lastly, the PCIe power-up sequence can be complicated with an external power supply; PCIe is very sensitive and finicky. Has the external PSU's power-up sequence with the GPU been tested and confirmed to work?
@drkcodeman
@drkcodeman 4 года назад
@@PeteDiful imgur.com/2s7QdcR
@PeteDiful
@PeteDiful 4 года назад
Got it! Looks like your Dell power rails have a "bridge" across the slots. I'll make these solderless connectors and send them to you this weekend. Will post pix of them when ready, ~11/16/19. I'm booked these days, so I can't do it any sooner.
@drkcodeman
@drkcodeman 4 года назад
@@PeteDiful It's already soldered. I'm going to try something other than DVI on the card.
@drkcodeman
@drkcodeman 4 года назад
@@PeteDiful ...and buy an adapter to do a continuity check.
@muhammadidhar877
@muhammadidhar877 2 года назад
Try rendering with the new version of Adobe Premiere.
@somerandomguy1533
@somerandomguy1533 4 года назад
I really hope no one burned down their house doing this. Please never leave your rig plugged in or unattended if you have done this. (In the hood we called this ghetto rigging.) And someone tell this dude to post a disclaimer before someone sues his ass over this... I need an electrical engineer to approve of this before anyone can change my mind that this is safe. (I'll be looking forward to that comment when it never happens.) Be safe and stop being cheap: buy a second PSU... or a fire extinguisher if you do it this way. P.S. Not trying to crap on your way of doing things or your video; it's just that this is really unsafe. Also, any input you can give me about my comment will be much welcomed, and hopefully we can clear some things up.
@downthegardenpath
@downthegardenpath 4 года назад
Yup. I really don't see the point in doing it like this. I have a GTX 1660 Ti in my R710 with an external PSU.
@JayG0o
@JayG0o 3 года назад
Hi, is this setup suitable for crypto mining, in terms of heat dissipation?
@malikonthesus
@malikonthesus Год назад
😑
@nettyvoyager6336
@nettyvoyager6336 4 года назад
Welcome to the sound of a Dyson, lmao.
@PeteDiful
@PeteDiful 4 года назад
Thanks! :) Much quieter, though. Keep good airflow and ventilation and things hum along quietly. Only when the GPUs kick in will I hear the GPU fans kick-a**. In any case, I've built an effective active/dynamic noise-cancelling circuit for
@Appri
@Appri 4 года назад
Pete, I felt it was only fair to update you on the server's progress. I BEAT YOUR PROCESSING POWER, MUHAHAHAHA! 2x Xeon X5690, GeForce GTX 1070 & 1050, 48GB RAM, 1740 watts of power supply in total, 4 USB 3.0 ports, Bluetooth capability, WiFi capability at 36MB/s, and most importantly, VR ready. I am using an ASUS dual-fan, two-slot 1070, which has two HDMI ports; that's perfect for using one as the display and one for the VR headset. I am currently playing Beat Saber, VR Minecraft, and Superhot on it. Other than that, the server is being used as a remote desktop so that I can train AI on it overnight. I am currently using the stock RAID system, which will likely change as soon as I manage to finish backing up 6TB of data, at which point I will move to a 12TB hard drive and a 500GB SSD. The server was a wild project, and I managed to finish it in fully working condition. My soldering job is really not the best, though, and the wires are actually quite sus, which means I handle it with care at all times. Other than that, the server is fully done, and here's a picture. (The 1070 driver install was literally giving me nightmares; I thought the 1070 wasn't displaying, which I was wrong about. It was just installing the drivers and chilling ;)) imgur.com/7DaGAll Hope you have an amazing day; you are an amazing man for creating this tutorial. The power supply tapping was what I most needed, and this project has been a blast.
@Appri
@Appri 4 года назад
By the way, you should know that both of these cards are on the top slot, even though both are dual slot cards. It really doesn't matter as long as you can push the pin in/ use the hdmi extender.
@PeteDiful
@PeteDiful 4 года назад
Brother, you did it! I saw the pix of the 2 beasty GPUs & the monster server! Nice. So glad you got it resolved; no need to send the spare GTX 970 now :) I'll one-up you with 2x GTX 970 now. Yeah, those GPU drivers are a bit tricky to get right. You always have to align whatever SW apps you're using with the correct driver version and go through a lengthy driver install/testing cycle (~30 min). If you're up for another fun challenge and haven't done so yet, look into unRAID for your next "RAID" gig. Just post back if you need some pointers to make the transition smooth.
@Appri
@Appri 4 года назад
@@PeteDiful Ehhhh, why unRAID though? Doesn't it work just fine using the normal stock RAID?
@PeteDiful
@PeteDiful 4 года назад
@@Appri Depends what stock RAID you have. The "6i" card is 100% HW RAID, where the OS sees storage as just 1 storage unit; the OS is hands-off on the drives. The key benefit is performance through parallel data access. The "H200" & "H310" cards are HW RAID too, but can be flashed to HBA (IT/IR) mode, where the OS sees each drive as an independent unit. If you go with the H200 or H310, you can leverage the super-thin unRAID (~200MB) to provide both the Linux OS & ultra-efficient RAID. It's good when unRAID has direct control of each drive.
Pros: unRAID's efficient, high-performance storage caching (e.g. an NVMe at 1.3GB/s immediately beats HW RAID performance) & data-protection redundancy; significantly more usable protected storage than HW RAID; 2x Windows VMs to run game servers (2 games at the same time); SW RAID doesn't break like HW RAID (HW RAID failures are painful, complex, expensive); super-easy drive data recovery because data isn't striped like HW RAID; leverage every single CPU the way you like; much more.
Cons: needs a good working knowledge of computing, I/O & Linux to leverage unRAID's capabilities ("SpaceInvader One" & the unRAID community can help); the last seconds/minutes of I/O writes may be lost in the event of a failure (unlike the 6i, which has a battery to save them before failure).
For example, with 12 CPUs I'm doing all of this with unRAID simultaneously, all with data protection:
1) Free NextCloud unlimited personal cloud storage & sync; Dropbox and FTP are gone.
2) 2x Windows VMs for graphics-intensive stuff (each VM allocated 4 CPUs & 1 GPU).
3) 2x spare Windows/OSX VMs for lightweight office work (each VM allocated 2 CPUs & 0 GPUs).
4) 6x hosted websites (can do more, but that's all I need at the moment).
5) Minor background VMs, like database backups.
The server has no need for a monitor connection, as the unRAID UI is via web browser & the VMs are via remote desktop.
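To make the core carve-up above concrete, here is a tiny, purely hypothetical sketch of checking a pinning plan for overlaps before typing it into each VM's settings; the VM names and core IDs are made up for illustration and this is not unRAID's config format.

    # Sketch: detect overlapping core assignments in a hypothetical pinning plan.
    from collections import Counter

    plan = {
        "video-edit-vm-1": [2, 3, 4, 5],    # paired with one passed-through GPU
        "video-edit-vm-2": [8, 9, 10, 11],  # paired with the other GPU
        "office-vm-1":     [6, 7],
        "office-vm-2":     [0, 1],
    }

    counts = Counter(core for cores in plan.values() for core in cores)
    overlaps = sorted(core for core, n in counts.items() if n > 1)
    print("Overlapping cores:", overlaps if overlaps else "none")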
@PeteDiful
@PeteDiful 4 года назад
Of course, for your VR gear you can still connect directly to the GPUs & USB. My current effective takeaway from the unRAID gig is: 1) >16TB of protected storage with 3x 8TB WD Reds. I can increase capacity by any amount by simply adding another drive, up to the limit of the server's slots & whatever external extensions; HW RAID has drive limits & type restrictions. 2) I/O reads/writes at 1.3GB/s with 1x 1TB Intel M.2 NVMe SSD (2.0GB/s is the limit for NVMe on PCIe 2.0).
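For anyone puzzling over the capacity math: unRAID dedicates the largest drive to parity, so the usable space is the sum of the remaining data drives. A minimal sketch of that arithmetic, assuming single parity and the 3x 8TB example from this comment:

    # Sketch: unRAID-style single-parity capacity estimate.
    drives_tb = [8, 8, 8]        # e.g. 3x 8TB WD Red

    parity = max(drives_tb)      # the largest drive becomes the parity drive
    usable = sum(drives_tb) - parity

    print(f"Parity drive: {parity} TB")
    print(f"Protected usable capacity: {usable} TB")  # 16 TB for 3x 8TB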
@drkcodeman
@drkcodeman 4 года назад
Do you have the link for the firmware update for this?
@PeteDiful
@PeteDiful 4 года назад
Hey, Cody! Sorry, I've been busy as usual and haven't had time to mess with this hobby. Which firmware are you referring to? The Dell server FW? The GPUs'? It should all be available from the OEM. For Nvidia GPUs, I first decide which application I'll be running and match that with the FW version that is compatible with the application. Then I go to the OEM's website, download that specific FW version, and install it. The install usually takes about 30 minutes, with a VERY long stretch where the computer seems to hang for like 30 minutes. Just leave the FW installation alone to do its thing. Be patient, follow the reboot instructions accordingly, and things will work. The key with GPU FW is to make sure it's compatible with all the applications you're using; not all applications have the latest/compatible GPU development code to work with the latest GPU FW. Keep that in mind.
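One quick way to see which driver version is actually installed in a VM before matching it to an application; a sketch that assumes nvidia-smi is on the PATH (it ships with the NVIDIA driver), not a step described in the video.

    # Sketch: print each GPU's name and installed NVIDIA driver version.
    import subprocess

    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.strip().splitlines():
        name, driver = (s.strip() for s in line.split(","))
        print(f"{name}: driver {driver}")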
@drkcodeman
@drkcodeman 4 года назад
@@PeteDiful I got it working now :) I had to turn off onboard video in the BIOS and use UEFI boot for the GPU.
@PeteDiful
@PeteDiful 4 года назад
Ah, nice, you got it running! It seems like your build is on a different server/MB? On the Dell R710, you don't need to disable the onboard Matrox GPU or change to UEFI boot. Different strokes for different boats, I suppose.
@flowtime8673
@flowtime8673 3 года назад
Can I ask where I can find that kind of PC?
@PeteDiful
@PeteDiful 3 года назад
eBay?
@TheRobMozza
@TheRobMozza 2 года назад
Impressive, but the architecture holds it back; CPU usage is a bottleneck, and the power draw would be a lot higher than a newer machine, unfortunately.
@MxODraconis
@MxODraconis 4 года назад
Wow, you should use a Dremel rotary tool to cut out the slots rather than melting them with a soldering iron!!
@PeteDiful
@PeteDiful 4 года назад
Thanks for the suggestion. I've tried with a Dremel tool before. It's unforgiving compared to a soldering iron: one mistake with the Dremel can easily spell disaster. The setup and tooling for the Dremel are also more expensive than the soldering method. I prefer quality, affordable, reasonably doable modifications that most people can handle comfortably.
@MxODraconis
@MxODraconis 4 года назад
@@PeteDiful Of course, to each their own. But melting will always be more of a risk and less precise or controlled, with unknown heat impact on the parts and unpredictable melting patterns. With the cutting wheel of a rotary tool, you can precisely cut the exact gap needed.
@seaham3d695
@seaham3d695 2 года назад
One day this video will have 710 likes.
@drkcodeman
@drkcodeman 4 года назад
Last resort is doing the PSU mod; if it works, I'm getting a bigger PSU and adding a K4200 for a 2-gamers-1-rig setup on unRAID.
@PeteDiful
@PeteDiful 4 года назад
Can you send close-up pix of the back-side power rails (where I did my soldering) on your R710 server? Dell has 2 types of PSU rails. If I know which type you have, I'll mail you a correct set of my solderless contacts if you wish.
@drkcodeman
@drkcodeman 4 года назад
imgur.com/2s7QdcR
@drkcodeman
@drkcodeman 4 года назад
@@PeteDiful I can solder, it's fine. I mean, there are only positive and negative, and the voltage is the same on all the positive cables, correct?
@drkcodeman
@drkcodeman 4 года назад
After all of that work, still no success. Is a 575-watt PSU enough for the server and the card?
@PeteDiful
@PeteDiful 4 года назад
Yes, just ground & +12V connections. (See your other comment about the solderless contacts; they'll be ready ~11/16/19.)
@alexalfonsomunt
@alexalfonsomunt 4 года назад
Hello, excellent videos. I have a question: is there a possibility to connect to the PCIe slot directly? I ask because my RX 480 gives poor performance when connected to this R710; it seems the bandwidth greatly affects performance ;c
@PeteDiful
@PeteDiful 3 года назад
Sorry for the delayed response. Google moved my RU-vid feedback comments to the "Social" mailbox, which I never check, and I've been busy with other projects (a 69GByte network storage transfer to SSD!) since then, so I haven't been checking this hobby thread. You probably figured things out by now :( You can connect the RX 480 directly to the R710. However, the R710's "Riser 2" is PCIe 2.0 x8 (max ~4GB/s), whereas the RX 480 is PCIe 3.0 x16 (max ~16GB/s); basically, the R710 architecture only provides about 1/4 of the PCIe throughput that the RX 480 can potentially use.