
On October 27th we held our annual flagship conference “Machine Learning and the Market for Intelligence”.

We believe strongly in Science, Technology, Engineering and Mathematics (STEM) education as the foundation for a career path that leads to massively scalable science-based companies. So, alongside all-star researchers, strategic investors, and the CEOs of Canada’s largest companies, we invited a group of sixth- to twelfth-grade students to take in the day, meet with AI startups, and tell us what they think is the most interesting application of artificial intelligence.

Here is a selection of their answers:

“Hearing assistive technologies, such as Sense Intelligent, which replace hearing aids by using the microphone in a phone to amplify specific sounds. They help people with disabilities without making them pay thousands of dollars. I did not know that 5% of humanity is hard of hearing or deaf.”

Dashiell Brown
Age: 11
Grade 6

 

“I was able to test out and learn about multiple different products relating to artificial intelligence. While I tried out many different products, such as a headpiece that would be worn constantly to detect seizures and a 3D-printed mechanical prosthetic arm, my favorite product was the Scary Cabin website, by MashUp Machine. The reason I was most impressed with this piece of technology was not because I thought it was the most useful, the hardest to make, or the one most related to artificial intelligence, but because when I used it, I had fun; it is useful to a larger audience and connects the most with the user.

I felt as if I was actually controlling it. The program allows users to create their own version of a base story, manipulate the camera angles, panels, etc. When I was watching my final story, I felt as if I was watching something I had actually made, as opposed to simply having put in a few details. I was in control of what I told the program to do, and it was able to understand me no matter how colloquially or formally I phrased my sentences. Everything I did was understood, which made my own story feel personal. Because of this, it felt as if I was not only controlling the artificial intelligence, but connecting and communicating with it.

I believe this AI program was my favorite because of its ability to please and entertain a larger group of people, its ability to connect with the user, and because, as I was using it, I was actually having fun and was, for a moment, completely immersed in the story I was telling, with no regard for the things around me.”

Caroline Graham
Age: 14
Grade 9

 

“The company Meta curates the relevant information from the thousands of scientific papers published daily using AI, in what I thought was the most practical and interesting way. This has clear applications for anyone who is conducting ground-breaking research and wants to stay up to date on any breakthroughs that could influence their work. In certain fields, e.g., biomedical engineering, a great number of research papers are published daily, so it is obviously impractical to have someone read every single one to determine whether any of the information is relevant. This elegant piece of software solves the problem, and Meta offers its basic service free of charge to users.

Let’s start by analyzing the NLP algorithm they use. They receive the papers in raw text format, meaning the text is not accompanied by any indicators of what certain ambiguous terms mean. Luckily, scientific papers tend to be fairly technical and avoid ambiguous terms to begin with. I predict that semantics are not being factored in, but rather just the raw discovery that was made or the technology that was developed. NLP uses context to determine what a given piece of text means.

This is where the magic happens. It is difficult to determine exactly which type of neural network they are using, the number of neurons, the learning rate, and the number of layers in the net, as that would essentially give me enough to replicate their entire software. However, I can safely assume that they determine the specific needs of the user by analyzing the questions users answer when they register, along with the research they are conducting. They then use those criteria to train their machine learning algorithm against the output of the NLP algorithm.

The technology that Meta has developed is truly remarkable and has very practical applications. I found Meta’s work to be the most interesting application of Artificial Intelligence at the conference.”

Siddhant Jain
Age: 17
Grade 12
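Siddhant’s guess at the pipeline — extract terms from each paper’s raw text, then score papers against a user’s stated research interests — can be sketched in a few lines. This is purely illustrative and not Meta’s actual system; the term extraction, scoring function, and sample data below are all made up for the sake of the example.

```python
# Toy sketch of the pipeline Siddhant describes (NOT Meta's real software):
# pull terms out of raw paper text, then rank papers by how well they match
# a user's registered research interests.
import re
from collections import Counter

def extract_terms(raw_text):
    """Crude stand-in for an NLP step: lowercase word frequencies."""
    return Counter(re.findall(r"[a-z]+", raw_text.lower()))

def relevance(paper_text, user_interests):
    """Score a paper by how often the user's interest terms appear in it."""
    terms = extract_terms(paper_text)
    return sum(terms[word.lower()] for word in user_interests)

# Hypothetical papers and a user profile built from registration questions.
papers = {
    "A": "Gene editing advances in biomedical engineering trials",
    "B": "Survey of compiler optimizations for embedded systems",
}
interests = ["biomedical", "gene"]

ranked = sorted(papers, key=lambda p: relevance(papers[p], interests), reverse=True)
print(ranked)  # paper A outranks paper B for this user
```

A real system would replace the word-count scoring with a trained model, as Siddhant suggests, but the shape of the pipeline — raw text in, per-user relevance ranking out — is the same.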

 

“Drones for use during baseball games. The drones would analyze live video, using a 250-core GPU on board, to target and track baseballs. I had considered machine learning for use on a wide scale, but this application was instead extremely specialized and niche. I also learned about some of the other potential uses for the object-recognition software, from spotting and capturing birds to tracking projectiles like balls for sports analytics.

Most importantly, I learned that what will be really special to see in the world of tomorrow won’t be the large-scale automation seen on the streets with autonomous cars or in factories, but the small, niche applications of machine learning that better the world a little bit at a time.”

Harold Dong
Age: 17
Grade 12

 

“A dog robot named Amelia, whose legs are partly 3D printed, with touch sensors on top of them; two sets of microphones, one where each ear would be, so that she gets an idea of where sound comes from; video cameras to see; and servo motors on her legs and tail so she can move. She can’t walk or run, but she can sit up, stand (like a dog), lie down, and wag her tail. These features give her 3 out of the 5 senses: she has hearing, sight, and touch, but is missing taste and smell. She also has an LED collar that can change colors.

Amelia’s machine learning algorithm allows her to learn to do things instead of being programmed to do them. She also has feelings, memories, and emotions. Her collar’s colors display her feelings. She can be sad, but she always goes back to being happy over time. She will be scared if you hurt her by pressing hard on her sensors, or if you cover her video camera, because then she can’t see. Her memories are important because she is more likely to repeat a happy memory than a sad one. For example, if she remembered lying down and being happy, and also remembered looking at the ground and being scared because she couldn’t see, she would be more likely to lie down because that memory was happy.

The importance of Amelia is in having machines that think like humans and animals, because currently animals think a lot more like humans than machines do. If machines start to think like humans and animals, you could have a pet robot dog, which would be a lot easier to train. You could erase any memories of it disobeying you, and you wouldn’t have to train it to go to the bathroom outside. Overall, it would probably be cheaper, because you would only have to charge it instead of buying it food. Also, you could have a pet that could talk to you or have super senses or something like that, because it is a robot.”

Quinn Agrawal
Age: 11
Grade 6
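The memory mechanism Quinn describes — actions tied to happy memories become more likely, actions tied to scary ones less likely — can be sketched as a tiny weighted choice. Everything here (the action names, the memory values, the weighting scheme) is invented for illustration; it is not Amelia’s actual code.

```python
# Toy model of Quinn's description of Amelia: each action carries a list of
# remembered feelings (+1 happy, -1 scared), and happier actions are chosen
# more often. All names and numbers are illustrative only.
import random

memories = {
    "lie down":       [+1],   # remembered as a happy experience
    "look at ground": [-1],   # remembered as scary (she couldn't see)
}

def preference(action):
    """Base weight of 1, raised or lowered by remembered feelings,
    floored so no action becomes completely impossible."""
    return max(0.1, 1 + sum(memories.get(action, [])))

def choose_action(actions, rng=random):
    """Pick an action, favoring those with happier memories."""
    weights = [preference(a) for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

actions = ["lie down", "look at ground"]
print({a: preference(a) for a in actions})  # lying down is strongly preferred
```

Erasing a bad memory, as Quinn suggests for a disobedient robot pet, would just mean deleting an entry from `memories`.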

 

“Speech recognition already works by splitting audio data into phonemes, which means that the word “cat” would register as a K, a soft A, and a T, which the software knows to be “cat”. The program has to be taught this. The way a human could find out how a word is pronounced might be to look in a dictionary, where they’d see “/kæt/” written beside “cat”. This is the word “cat” expressed in the International Phonetic Alphabet (IPA), a fully formed, well-documented alphabet that can describe any sound humans make with their mouths. IPA can also be used to describe dialect differences; for example, the word “bath” changes from /bæθ/ in American English to /bɑːθ/ in Australian English. One of my favourite charts actually describes all of these vowel differences for most English dialects and can be found here: https://goo.gl/Y9mE3J

Here’s my idea: if a speech recognition program outputted not words but the phonemes it heard (in IPA), then, as long as it was taught the differences between English accents, it could figure out what kind of English the speaker is speaking and then use a dictionary for that accent, because the pronunciations for every word in the language are right there in the dictionary and don’t have to be taught. This can be extended even further: once you have a computer that can recognize all IPA sounds, you could hook up a Spanish dictionary with IPA versions of all the Spanish words, and now your speech recognition program can start learning and improving its Spanish dictation. You wouldn’t even need to hire a single Spanish-speaking person.”

Ariel Gans
Age: 15
Grade 11
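Ariel’s idea — detect the accent from the IPA phonemes heard, then look the words up in that accent’s pronunciation dictionary — can be sketched in a few lines of Python. The tiny dictionaries below are made-up examples (using the /bæθ/ vs. /bɑːθ/ contrast from the quote), not a real lexicon, and a real recognizer would work from probabilities rather than exact matches.

```python
# Sketch of Ariel's proposal: a recognizer that outputs IPA phoneme strings
# picks the accent whose pronunciation dictionary recognizes the most of what
# it heard, then uses that dictionary to map pronunciations back to words.
# The dictionaries are illustrative stand-ins for a full pronouncing lexicon.
DICTS = {
    "American":   {"bæθ": "bath", "kæt": "cat", "dæns": "dance"},
    "Australian": {"bɑːθ": "bath", "kæt": "cat", "dɑːns": "dance"},
}

def guess_accent(heard_ipa):
    """Pick the accent whose dictionary matches the most heard words."""
    def hits(accent):
        return sum(1 for word in heard_ipa if word in DICTS[accent])
    return max(DICTS, key=hits)

def transcribe(heard_ipa):
    """Guess the accent, then look each pronunciation up in its dictionary."""
    accent = guess_accent(heard_ipa)
    lexicon = DICTS[accent]
    return accent, [lexicon.get(word, "?") for word in heard_ipa]

print(transcribe(["bɑːθ", "kæt"]))  # → ('Australian', ['bath', 'cat'])
```

Extending to Spanish, as Ariel suggests, would just mean adding a Spanish entry to `DICTS` — no new acoustic training needed, since the phoneme recognizer already covers all IPA sounds.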

 
