To start, users uploading data to the AI rely on a "garbled circuits" approach that splits their input into two distinct shares, one for each side of the conversation, hiding the data from both the user and the neural network while keeping the relevant output accessible. That approach would normally be too intensive if it were used for the entire system, though, so MIT uses homomorphic encryption (which both accepts and produces encrypted data) for the more demanding computation layers before sending results back to the user. The homomorphic method has to introduce noise in order to work, however, so it's limited to crunching one layer at a time before transmitting its output. In short: MIT splits the workload based on what each side does best.
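To see why a server can compute on data it can't read, here's a minimal sketch of the homomorphic half of the idea using the textbook Paillier cryptosystem, which is additively homomorphic. This is an illustration, not MIT's actual scheme (the real system uses a different, lattice-based construction): the server evaluates one linear neural-network layer on ciphertexts, and only the client can decrypt the result. The primes, weights, and input vector below are made-up toy values.

```python
import math, random

# Toy Paillier keypair -- real keys are thousands of bits, these are not secure.
p, q = 1117, 1103
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    # c = g^m * r^n mod n^2, with r random and coprime to n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Client side: encrypt the input vector and send only ciphertexts.
x = [3, 1, 4]
cts = [encrypt(v) for v in x]

# Server side: evaluate one linear layer with its plaintext weights.
# Multiplying ciphertexts adds plaintexts, so E(w . x) = prod(E(x_i)^w_i).
w = [2, 5, 7]
c_out = 1
for ci, wi in zip(cts, w):
    c_out = (c_out * pow(ci, wi, n2)) % n2

# Client side: decrypt the layer's output (2*3 + 5*1 + 7*4 = 39).
print(decrypt(c_out))
```

The server never sees `x` or the layer's output in the clear; in the MIT system, the nonlinear activation between layers is then handled by the garbled-circuit exchange rather than by the homomorphic scheme.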
The result is performance up to 30 times faster than what you'd get from conventional methods, and promises to shrink the needed network bandwidth by "an order of magnitude," according to MIT. That could lead to more uses of internet-based neural networks for handling vital info, rather than forcing companies and institutions to either build expensive local equivalents or forgo AI-based systems altogether. Hospitals could teach AI to spot medical issues in MRI scans, for example, and share that technology with others without exposing patient data.