Writing code using speech recognition: a new barrier overcome in software development (with some drawbacks)



Alexa, Siri, Android Auto... our voice is taking on ever greater weight in the way we interact with new technologies. In many cases we do it for sheer convenience, but we can do so because others worked on it before us out of necessity.



More and more technologies allow us to develop software by voice (that is, by issuing commands that manipulate code and automate workflows), but this is possible because some developers, affected by disabilities and injuries, previously faced a dilemma: create these systems or dedicate themselves to something else.



Serenade and Talon, leading the race



Ryan Hileman began developing Talon in 2017, when he was forced to quit his full-time programming job after suffering severe hand pain for a year.



He wanted to develop a "hands-free" system for programmers that would allow "anyone to completely replace the mouse and keyboard."















Chart: IEEE Spectrum



Two years later, Matt Wiethoff, a software engineer at Quora, was diagnosed with a repetitive strain injury. He, too, had to quit his job, and started working on his own voice programming platform: Serenade.



Wiethoff did so out of necessity, faced with the prospect of having to devote himself to something else requiring less typing. A few months later, Serenade raised $2.1 million in a seed funding round.



Serenade has a speech-to-text engine developed specifically to recognize code (and transcribe it into valid syntax), unlike Google's equivalent, which is designed to recognize conversational speech.
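To give a sense of what "transcribing speech into valid syntax" involves, here is a minimal toy sketch: it maps an already-transcribed spoken command to a line of Python. This is not Serenade's actual implementation; its real engine uses a custom speech model and code-aware parsing, and the command grammar below is entirely hypothetical.

```python
import re


def command_to_code(command: str) -> str:
    """Translate a simple spoken command into valid Python source.

    A toy illustration of the voice-to-code step; the supported
    phrases ("add function ...", "set ... to ...") are made up
    for this example and are not Serenade's real grammar.
    """
    m = re.fullmatch(r"add function (\w+)", command)
    if m:
        return f"def {m.group(1)}():\n    pass"
    m = re.fullmatch(r"set (\w+) to (\w+)", command)
    if m:
        return f"{m.group(1)} = {m.group(2)}"
    raise ValueError(f"unrecognized command: {command!r}")


print(command_to_code("add function parse"))  # def parse(): / pass
print(command_to_code("set count to 0"))      # count = 0
```

The key point the sketch illustrates: a conversational engine would happily emit "set count to zero" as plain text, while a code-aware one must produce something a compiler or interpreter will accept.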



Talon, for its part, is made up of three main elements: voice recognition, noise recognition, and eye tracking. The last two make it possible to completely replace the mouse: noise recognition lets a pop made with the mouth act as a click, while eye tracking moves the pointer. According to Hileman,




"That sound [de chasquido] it's easy to do: it requires little effort and requires low latency to recognize, making it a much faster, non-verbal way to click the mouse that doesn't cause vocal strain. "




There are differences in approach between the two, as can be seen in the attached chart: while Talon sounds very different from human conversational speech, with a specific command for every action, Serenade opts for a more abstract approach, in which the program infers certain actions from each command, thus sounding much more natural (if you speak English, of course).














The downside? Missing the silence. And the music.



Three years ago, the journal Nature covered the cases of some of the pioneers in the field of voice programming. Harold Pimentel, an expert in computational genomics, suffered the same type of injury as Wiethoff... caused, in his case, by having been born with only one arm, which concentrated all his typing on a single hand. He and Naomi Saphra (who suffers from small fiber neuropathy) began using a piece of software, now discontinued, called VoiceCode.



That software, like Talon, made it easy for users to create their own custom command sets for programming: Pimentel says he had to learn 40 pages of commands, while Saphra was able, after two months of practice, to handle mathematical formulas in LaTeX.
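A "custom command set" of this kind is, at heart, a user-defined table binding spoken phrases to code snippets. The sketch below shows the idea in miniature; the phrase names and the format are hypothetical and do not reproduce VoiceCode's or Talon's real configuration syntax.

```python
# Hypothetical user-defined voice-command table, in the spirit of
# VoiceCode/Talon custom configs (not either tool's actual format).
SNIPPETS = {
    "for loop": "for i in range(n):\n    ...",
    "main guard": 'if __name__ == "__main__":\n    main()',
    # A LaTeX binding, of the kind Saphra used for math formulas:
    "fraction": r"\frac{numerator}{denominator}",
}


def expand(phrase: str) -> str:
    """Return the snippet bound to a spoken phrase.

    Unbound phrases fall back to literal dictation, so ordinary
    words still come through unchanged.
    """
    return SNIPPETS.get(phrase, phrase)


print(expand("main guard"))
print(expand("hello world"))  # no binding: dictated literally
```

Forty pages of commands, as Pimentel describes, simply means a table like this one scaled up to hundreds of entries that the user must memorize.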



But alongside the many obvious advantages this software offered them, Pimentel and Saphra also highlighted the disadvantages of programming by voice. The former admitted to having throat problems and being forced to "drink a damn lot of water", in addition to missing programming in silence.



Interestingly, Saphra missed the opposite: "I used to listen to music or sing while writing code. Or just curse. I can't do that anymore."



Via | IEEE Spectrum. Images | Serenade & IEEE Spectrum