As good a place as any to get an idea of the sort of work Chaotic Moon does is a project called Mockzy AI. It's a web tool that takes texts, speeches and written records of famous figures from the past, and bundles them into a search-engine interface that generates conversations between any two characters. So, if you ever wanted to see (and hear) James Brown and Queen Victoria talking about how to grow carrots, or Gandhi discussing the merits of hip-hop versus trap with Dickens, then this is the tool for you.
Mockzy AI uses publicly available records of texts from each famous figure (the people featured were chosen for the wealth of material available), and feeds them to IBM's Watson via its Bluemix platform. When you enter a question or search term, responses are matched and ranked against the available prose, and Mockzy then attempts to turn them into a conversation. The end result depends on what you ask. I'm fairly sure Gandhi never spoke about the iPhone, so terms like that are likely to be skipped in favour of the rest of the language. But as an exercise in making machine learning and AI relatable to (living) humans, it's a fun one.
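To get a feel for the match-and-rank idea, here's a toy sketch: score each archived quote by keyword overlap with the query and return the best matches first. Everything here, the function names, the scoring scheme and the sample quotes, is invented for illustration; the real system runs on Watson, not anything this naive.

```python
# Hypothetical sketch of Mockzy-style response ranking: score archived quotes
# by keyword overlap with the query. Illustrative only, not Watson's method.

def tokenize(text):
    """Lowercase and split into words, stripping surrounding punctuation."""
    return {word.strip(".,!?;:'\"").lower() for word in text.split()}

def rank_quotes(query, quotes):
    """Return quotes sorted by keyword overlap with the query, best first."""
    query_words = tokenize(query)
    scored = [(len(query_words & tokenize(q)), q) for q in quotes]
    # Quotes sharing no words with the query score zero and sink to the
    # bottom, which is roughly how an off-topic term gets skipped.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [quote for score, quote in scored]

quotes = [
    "The weak can never forgive. Forgiveness is the attribute of the strong.",
    "Music is the universal language of mankind.",
    "An eye for an eye only ends up making the whole world blind.",
]
ranked = rank_quotes("What does music mean to mankind?", quotes)
print(ranked[0])
```

A real ranker would weight rare words more heavily (TF-IDF or embeddings) rather than counting raw overlaps, but the shape of the problem is the same.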
What about something a little more intriguing? That would be another project called Larí. The napkin pitch is this: when we are about to speak, our brains send an electrical signal to the larynx. This is apparently also true if we just think about speaking (called subvocalization). It's a phenomenon that NASA was looking into over 10 years ago, but not much has happened on the topic since. The Larí project is exploring the idea that this signal can be captured using an ECG, with pads attached around the larynx. It's not quite mind-reading, but it has similar applications.
While the project is in its early stages, and more a proof of concept, Chaotic Moon thinks that if the signals produced by the heart, and any other interference, can be removed, it could be possible to capture the "digital signature" of words, build a library of these from multiple test subjects and, theoretically, turn that into a system for translating thoughts into data that could be sent to machines. A basic use case would be people who are unable to speak (assuming the brain-larynx signals are still there) using this to convert thoughts into words. Chaotic Moon has another sensory project called Sentiri: a headband covered in infrared sensors that provides directional haptic feedback to warn of obstacles, allowing the wearer to navigate a maze without sight.
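The interference problem is easy to picture with a toy example. Electrodes near the larynx pick up the slow, strong heartbeat on top of much fainter, faster muscle activity; one crude separation is to subtract a moving average, which acts as a simple high-pass filter. The signal frequencies, window size and every name below are assumptions for illustration; a real pipeline would use proper ECG artifact removal, not this.

```python
# Toy sketch of the Larí interference problem: a slow 1 Hz "heartbeat" swamps
# a fast 40 Hz "speech" component. Subtracting a moving average (a crude
# high-pass filter) recovers most of the fast component. Illustrative only.
import math

def moving_average(samples, window):
    """Centered moving average, padding the edges with the end values."""
    half = window // 2
    padded = [samples[0]] * half + samples + [samples[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(samples))]

def remove_baseline(samples, window=50):
    """Subtract the slow-moving baseline, keeping the fast fluctuations."""
    baseline = moving_average(samples, window)
    return [s - b for s, b in zip(samples, baseline)]

rate = 500  # samples per second (invented for this demo)
t = [i / rate for i in range(rate)]
heart = [2.0 * math.sin(2 * math.pi * 1 * x) for x in t]    # slow, strong
speech = [0.2 * math.sin(2 * math.pi * 40 * x) for x in t]  # fast, faint
mixed = [h + s for h, s in zip(heart, speech)]

cleaned = remove_baseline(mixed)  # close to `speech`, away from the edges
```

The 0.1-second window spans exactly four cycles of the fast component, so it averages to near zero while barely touching the slow heartbeat, which is why the subtraction works here.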
While these projects mix fun with some serious potential use cases, Chaotic Moon sometimes just likes to let it all hang out. Or not, as it were. For fun, the team was also demonstrating Notifly, a gadget that sends a message to your phone when you're flying low. It's silly, but, well, we've all had a time when that would have been handy, right? Using an Intel Curie board and some basic circuit connections at the top of the zipper, it knows when you've done your top button up, and if 8 seconds later the zipper isn't at the top, you'll get a shame-saving message.
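The logic described is a tiny state machine: two contact events (button closed, zipper at the top) and one timer. Here's a guess at how it might look, with the class, its method names and the simulated clock all invented for illustration; the actual Curie firmware was not shown.

```python
# Hypothetical sketch of Notifly's logic: if the top button closes and
# 8 seconds pass without the zipper reaching the top, fire an alert.

ALERT_DELAY = 8.0  # seconds of grace after the button is done up

class Notifly:
    def __init__(self):
        self.button_done_at = None  # timestamp of the button closing
        self.zipper_up = False

    def on_button_closed(self, now):
        self.button_done_at = now

    def on_zipper_top(self, now):
        self.zipper_up = True

    def should_alert(self, now):
        """True once the grace period lapses with the zipper still down."""
        return (self.button_done_at is not None
                and not self.zipper_up
                and now - self.button_done_at >= ALERT_DELAY)

gadget = Notifly()
gadget.on_button_closed(now=0.0)
print(gadget.should_alert(now=5.0))  # still within the grace period
print(gadget.should_alert(now=9.0))  # zipper never came up: send the message
```

On real hardware the "message" would go out over the board's Bluetooth LE radio to the phone, but the decision itself is just this timer check.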
Chaotic Moon has plenty more projects on the go, including a VR game that mixes puzzle solving with dragon slaying (naturally). Initially this might seem like just a fun thing to do in VR, but as with most of the projects here, there's a serious application too. The puzzle element puts you under stress, requiring you to solve a problem with a cool head, and failure carries serious consequences. These are essentially the same conditions first responders have to deal with, so it's not hard to imagine this being adapted as an engaging tool for training, or for keeping skills sharp. For now, though, we're looking forward to what the team dreams up for next year.
Mallory Johns contributed to this report.