As an accessibility professional, this history of screen readers and AAC was fascinating! You can still use that voice on the latest version of JAWS today. Thank you so much for making this!
You mentioned DynaVox once in the video. My mother used the Tobii Dynavox with eye tracking since she lost the ability to speak clearly and was unable to control a computer like she used to. Paired with Alexa, we could hear it telling Alexa what to do. Dynavox: "Alexa, change channel to 164." Alexa: "Changing to channel 164 on Fios." *TV channel was changed*
I enjoyed this. I never had the PC card, but I did have the DECtalk Express synthesizer. I'm a blind user and relied on that for my computing needs. Funny how I've gone entirely iOS these days and braille in text rather than typing, but there it is. Thanks for the video.
This is so underappreciated. This device was the predecessor of what we now know as JAWS and VoiceOver, and was the only option for many people until JAWS for DOS came out.
Neither did I until recently, and I had no clue this early version was freeware. I started using JAWS in demo mode as a kid in the late 90s and have seen it grow a lot, but wow! It's all pretty amazing!
Sound Thinking You said it! I had no real idea what goes into web access until I worked in my university's office of disability services the last two semesters. As a JAWS user, I was never on the other side of access. I need things accessible but never thought about what went into making that happen. This was a new realm for those I worked with to explore. We were learning from the ground up about web access, but we tried our best to do accessibility reviews for school web pages. The sheer amount of minutiae needed to be compliant with the WCAG 2.0/2.1 levels (A at bare minimum, AA preferably, but AAA ideally) astounds me. It took months for us to learn how to even properly review our web pages. We eventually found a template that worked for us and reviewed several pages using the WCAG 2.1 AA guidelines. I can't imagine how difficult it might be to create those pages. Anyway, my point is I have massive respect for web and app devs, especially those who can make their pages easily accessible to screen reader users and others. Thank you for your hard work, Hanna.
I love how old computers sound like someone talking while they are breathing in, if you don't know what I'm talking about... talk while you are breathing in... it's hysterical. Be careful it can make you a little light headed.
4:35 I've seen that thing before in a group home for disabled children in the mid-to-late 2000s. It was called a "Springboard," and the non-verbal children would use it to communicate, before iPads became available enough for group homes to adopt those. I remember one of the children at the group home had a compulsion to say he wanted whatever he decided he was going to eat off his plate before eating, so it'd go like this: [has special juice with vitamins and stuff because he was a very picky eater] [taps button] I [taps another button] WANT [taps another button] DRINK [drinks juice] I imitated the voice a few times and once jokingly hit "Apple" over and over again. APPLE. APPLE. APPLE. APPLE. The non-verbal user of the Springboard was amused by me screwing with the device and started repeatedly hitting APPLE to show his amusement.
Those things were just as expensive as the DECtalk itself, if not more so. The assistive device industry is so insular that the manufacturers can basically set whatever price they want for these and people will pay it. Pretty much the sky is the limit.
The whole speech synthesis thing is really cool, and the DECtalk singing reminds me of the really early VOCALOID software created by Yamaha in the early 2000s, or a lot of the UTAUloid voices (freeware VOCALOID). (Yes, we all expected that some weeb would mention Vocaloids.)
@@rastas_4221 Americans don't know any accents. Any American that says they can do "a good Australian accent" either sounds British or like a drunk New Zealander.
The history of accessibility hardware and software is so fascinating. Imagine what a huge difference this technology made in so many people's lives. In the '90s I had a co-worker who was 100% visually impaired and with the help of technology like this (not sure if it was exactly this) she was the hardest-working, most essential person in our entire office.
My first employer used DEC x86-based PCs running Windows NT Workstation 4.0. While NT 4.0 was meh, those PCs were some of the more pleasant to service and work on. I was saddened when DEC was bought out by Compaq...
DECtalk is pretty awesome... it was used as the TTS engine in Moonbase Alpha, and became the root of many great memes: "I'm gonna eat a pizza." *dials number* "Hello, can I order a pizza?" "No." "Why?" "'Cause you are John Madden."
I would love for someone to code Doom with this so every time you shoot it says "Pew! Pew! Pew!" Sure, it would be easier to just play a WAV, but not nearly as fun.
Voice turned out to be easy... speech recognition turned out to be the hard nut to crack. But in the beginning researchers had it backwards... they thought speech synthesis would be the difficult problem to solve.
Depends on what your criteria are, I guess. Though modern speech synthesizers are certainly understandable, there's still a lot of work to be done to make them sound truly human. Speech recognition has also come a long way. It's not 100% accurate, of course, but then neither are human transcribers...
Lots of early AI researchers made this kind of mistake. It was also expected that walking and vision would be easy, but things like advanced symbolic algebra and calculus would be hard (despite complex linear algebra on machines being a solved problem before WWII, look it up, it helped break the ENIAC patent on electronic computers). That was all backwards. Algebra and calculus work on precisely codified rules, which are relatively easy to convert to machine instructions. Walking, talking, seeing, and hearing depend on incredibly complex neurological systems that we still only have a vague understanding of. We basically had to solve all those problems by hand without a lot of guidance from how nature does it. This is why it's taken about 50 years to get autonomous vehicles even close to being ready for general use.
Yep. It's one of those things where, because it comes so easily to us, we assumed it would be easy for a machine. You don't struggle to understand someone with a slightly different accent to yours, so why would a machine? But it just went to show how little we understood about how these things work, and how complex they truly are.
OMG, this is definitely one of my favorite videos of yours! I use JAWS and its Apple counterpart, VoiceOver, daily. I've been using JAWS since the late 90s, so I never knew how JAWS worked in the early days with hardware voice synthesizers. They still use the voices you covered in things like the talking version of the TI-83 graphing calculator. My ears are thankful that synthesized speech has improved dramatically since then, though those old voices are still so much fun. It's amazing that now I can just go download so many voices for JAWS off their website, yet people 25+ years ago had to get a whole expensive sound card for JAWS. BTW, JAWS itself is still crazy expensive, costing around $1000, with roughly $240 paid every couple of years to keep the software maintenance agreement active. If I hadn't been using JAWS for 20 years, I'd probably use a free and very good alternative called NVDA.

When I was a kid, I didn't own a full version of JAWS, so I had to download the 40-minute demo mode version. I think I did this starting with version 4 and ending with version 9.0, when I got the state to pay for it for college. I downloaded the demos off their website using good ol' America Online!!! No comment; we had AOL until 2008 and no broadband until like 2016. Anyway, man, that thing took all night to download, sometimes never finishing, whereas now I can download a JAWS update in less than a minute... That 40-minute mode was crazy horrible; I'd have to restart my PC literally every 40 minutes. They still use that model for newer JAWS demos today if you're curious how old JAWS 2.3 compares to new JAWS 2018, I think they're calling it. They stopped the numbered releases with JAWS 18 back in like 2017 because they were bought by or merged with another company who changed the naming system for JAWS releases. Sorry for my rant, but I loved this video. It's one of my favorites from you, aside from the Sims pack reviews you do, which strongly influence my purchases.
I've so rarely seen classic speech synthesis covered, even rarer for it to be linked to early JAWS, rarer still for this all to come from a sighted guy. If I'd not been subscribed to you for years already I'd subscribe to you in an instant! You put out some quality videos!
There is no incentive to bring the price down considering the narrow target audience. These companies typically have only a few thousand customers at most, and the cost of developing the software is very high. It's the reason why eye trackers can still cost as much as a compact car. Even at that price they are probably not making much profit, and they often depend on product support fees. Basically, these people can name a price and customers will pay it, because there is so little competition.
Half Life 1 used a hand-programmed speech system with prerecorded phrases and programmed sentences based on environmental variables. Not even close to TTS.
I'm 70 years old, so we had to make our own speech and music synthesizers when we were kids. It's good to see younger people interested in older hardware, not because I'm nostalgic, but because you are understanding how things actually work.
CLINT! You're so soothing to watch and listen to. Of course the topics are close to my heart too, having grown up with the C64, Amiga 500, and old PC games. :D Really appreciate the effort, please keep on going, sir!
Actually, Prof. Hawking continued using his speech synth because it was based on the voice of the man who made it for him, who ended up becoming one of his best friends. The man passed away, and Hawking refused upgrades not because the voice had become iconic, but because he wanted to honor his friend's memory.
I do remember hearing about the effort to preserve Stephen Hawking's speech synthesizer; it required emulating a certain obscure CPU, and they got help from the author of a SNES-turned-multi-console emulator called higan. The author was glad that his effort to more accurately emulate whichever game it was that used the "enhancement chip" with that CPU embedded inside could turn into something more significant.
I noticed the Outpost 2 box you have in your background, and I really wish you'd do a video on that game some time. I was never good enough to beat it, but it does have a special place in my heart. And it'd be extra fun to see you do a vid on it, since you bashed Outpost 1 to pieces in a video back in 2010.
"A cross between a Swede and an Indian"... "a drunken Swede"... "a Swede with a speech impediment"... all the descriptions of the voice are just describing a Dane...
Watching this in January 2020, and I just noticed Outpost 2 on the lower right (my right) shelf behind you. I loved that game as a kid, and the included novelisation of the game's story was probably what got me into reading books in the first place. Here's hoping for a video on Outpost 2 some day. :)
Been watching you since you were in the hundreds of subs, and I just want to let you know you're doing great. As a tech guy, even as a career, I love every video you've put out. It makes my life a little easier knowing there are guys like us out there interested in old tech.
I've been watching your videos for a number of years. Talking of voices, I've just realised I love the sound of your voice. You should read bedtime stories.
Modern versions of this exist, though the software on the chip is not 4.2.C. Look at the DECtalk USB from Access Solutions. I've got one and it's pretty cool. I've been a big DEC fan for a long time, and it was extra cool to get something I could use with a modern system. Also, I had no idea the Archive had an emulator of this. Great vid.
VWestlife What was great about the Amiga version is that it was based on the earlier S.A.M. for the C64 and Atari (the latter had that iconic voice used in U96's techno version of the "Das Boot" theme), so it had a phoneme mode where you could make it speak other languages, albeit with an American accent. I had great fun with it.
I've been playing videos from your channel for about a week now and I really enjoy them, especially the Oddware instalments. But this one finally prompted me to comment. You see, I am accessing YouTube at the moment using Chrome and JAWS 2018 (already a few years out of date). I never had any DECtalk hardware, but I noticed this synth back in the 80s when I was growing up: the Mississauga Transit bus telephone numbers would read out the schedule using the Paul voice. Later on at libraries I saw the big Kurzweil Reading Machines of various kinds. They were big standalone devices that included scanner, optical character recognition, and voice output. In the late 90s they had a software version, and finally in 1999 I got the programme for Windows 98, which included DECtalk software. The voices, unlike many speech synthesisers of the 1980s, still kind of stand up today as being pretty good! They do have a pretty unique expressive quality.
Does it sound like the voice of CVS Pharmacy, the Social Security Administration and Apple? Or does it sound more computerized? If the former, here's a fun fact: That voice you hear isn't text-to-speech, it's a voice actor named Tom Glynn. If the latter, it IS a text to speech system that was modeled after Tom Glynn, who just *sounds* like a computer... ;)
This is a superb video and well researched as always. I am registered blind myself, and I really liked the fact you ran it with JAWS to show people how the screen reader works. It's worth noting that this speech synth gets commonly confused with the Macintosh system voice, which is used, for example, on the Benny Benassi track "Satisfaction". There are some DECtalk voices used in some tracks; the only one I know of is Mike Oldfield's "Surfing" on the Light and Shade album. The TextAssist package from Creative, which they shipped with some of their sound cards (with utilities such as TextOle etc.), also used the DECtalk voice. For a long while this is what I used to read documents, as back then a DECtalk synth plus screen reader (without the cost of the computer) would probably set you back just over $2000. Of course these days we use JAWS for Windows with software speech synthesis, but during the 90s you just couldn't get responsive speech output quickly enough without using dedicated hardware such as this. I actually have a DECtalk Express somewhere myself; this is the same as the DECtalk PC but in an external package with a serial port. The advantage of this was that it worked past Windows NT (the DECtalk PC being an ISA card). You may not know, but the software on the DECtalks is actually on flash ROM, and you could upgrade it. The version you have is, I believe, the best and the last version done by DEC.
Not to mention the Windows sound card limitations. You couldn't have multiple wave sources going to the same output; it would throw up an error or abort sound playback, so a hardware speech synth was the way to go.
Probably not. To the best of my knowledge, Kraftwerk had begun working on the album "Electric Café" in the early eighties (whereas the album was published several years later, in 1986), so I guess it was a different speech synthesis they used for songs like "Musique Non Stop". The Wikipedia article on Kraftwerk says that a unique speech synthesis setup/machine was built for Kraftwerk's earlier works in the seventies, so their spoken words from that time probably have a different history of origins than DECtalk. As far as "Expo 2000" is concerned, I'm not sure. To me, it sounded quite different from the DECtalk sounds, so I would say it was a different machine. Furthermore, Kraftwerk usually work with their own customized setups and machines, so I doubt they used a (more or less) ready-made machine like this one. But that's just an educated guess.
Kraftwerk mainly worked with Votrax systems, but I do remember seeing a credit for DECtalk on that album somewhere. I think "Sex Object" may have used it a bit? The voices on "Expo 2000" sound like some method of vocal resynthesis as opposed to synthesis by rule; maybe there was some Texas Instruments product involved?