IEEE Standards Association

Engadget Editorial Policies

The unique content on Engadget is a result of skilled collaboration between writers and editors with broad journalistic, academic, and practical expertise.

In pursuit of our mission to provide accurate and ethical coverage, the Engadget editorial team consistently fact-checks and reviews site content to provide readers with an informative, entertaining, and engaging experience.

Stories By IEEE Standards Association

  • The Future is in the Ear of the Beholder

    By Simon Carlile and Stuart Karten

    Seeing might be believing, but for many people, hearing is being there. Our sense of being in a space, and our awareness of objects in that space, are strongly driven by our auditory perception of that space—far more so than our visual perception. Think how 3D audio creates an immersive sense of being someplace. That's one reason why, in the near future, augmented reality (AR) glasses and other interfaces could be supplanted by "hearable" devices that people wear in their ears like a Bluetooth earbud (think the Bragi Dash).

    Hearables would be designed to listen all the time, just as Amazon Echo and Google Home do today. But as wearable devices, hearables would provide those kinds of virtual assistant services wherever the user goes. Hearables also would be sophisticated enough to know what to listen for and how to act on it, all without requiring the user to speak a prompt first. For example, hearables could:

    • Detect a noisy environment and respond by turning on speech-enhancement processing, so the user doesn't have to struggle to understand what people are saying.
    • Automatically translate foreign languages.
    • Offer a virtual assistant feature that listens for key words and then provides useful contextual information.

    These examples aren't blue sky, either. They're all features that are possible today or within just a few years, and they are just the tip of the iceberg of what can be achieved with such ear-level devices.

    Connecting to Your Smartphone and Your Brain

    All of these examples show how hearables can augment the reality of daily life in ways that let people focus on what they're doing instead of being distracted, such as by sneaking a glance at their smartphone to look up that information manually. Hearables could be particularly attractive to people who find it easier to understand and retain information when they hear it rather than read it. Hearables are like having a virtual assistant who's always there to help. By rendering the assistant at a virtual position in space, you can choose to ignore it or focus your attention on it as if it were just another person in the conversation.

    Today, hearables complement the smartphone, but in the 5G future, they will likely replace it. Current-generation hearables connect to the user's smartphone, but instead of simply serving as a mic and speaker for the phone, next-generation hearables would be sophisticated enough to perform a variety of tasks on their own or connect directly to services in the cloud (think Alexa's "skills"); the smartphone would just provide the cellular or Wi-Fi connection. And with its own wireless modem to connect directly to a cellular or Wi-Fi network, the hearable doesn't need a smartphone at all, at least for those who can live without a touch screen and find it more convenient to ask their hearable for something rather than pecking it out on a virtual keyboard.

    Although users could simply ask their hearable for something, that probably won't be the only interaction option. We're among the people developing electroencephalography (EEG) technologies that would enable hearables to analyze their user's brain waves to identify what they want or need—and when. EEG-based "pass thoughts" rather than passwords will transform the security model. Mental gestures are reflected in changes in EEG patterns and can supplement or supplant physical gestures. Understanding listener intent will be critical in managing the interface. For example, when you're completely absorbed in a conversation, your EEG would effectively tell the hearable, "Don't bug me right now."
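    As a toy illustration of that idea, the sketch below gates notifications on an engagement estimate computed from EEG band power. It runs on synthetic data; the sample rate, frequency bands, and threshold are illustrative assumptions, not a description of any shipping system.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # EEG sample rate in Hz (assumed)

def band_power(eeg, lo, hi, fs=FS):
    """Average spectral power of a single-channel EEG buffer between lo and hi Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def absorbed(eeg):
    """Crude engagement proxy: beta power high relative to alpha (illustrative threshold)."""
    alpha = band_power(eeg, 8, 12)   # relaxed / idling rhythm
    beta = band_power(eeg, 13, 30)   # active-focus rhythm
    return beta / alpha > 1.0

def maybe_notify(eeg, message):
    """Hold the notification while the wearer appears absorbed ("Don't bug me right now")."""
    return None if absorbed(eeg) else message

# A synthetic 4-second window stands in for real EEG samples.
rng = np.random.default_rng(0)
window = rng.standard_normal(FS * 4)
print(maybe_notify(window, "Your next meeting starts in 10 minutes."))
```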
    Hearables also will act on input from their surroundings. Museums are one current example of how architects, interior designers, and others are using the Internet of Things (IoT) and other technologies to equip everything from buildings to artwork with the ability to provide AR experiences. Today, those experiences are delivered primarily through smartphones, though AR headsets are increasingly used. Tomorrow, they'll include hearables, which will leverage those AR technologies and use cases and then take them to the next level.

    Leveraging the Environment While Blending In

    The trick for hearables designers is to take all of these types of existing and emerging information sources and present them in ways that are intuitively and cognitively meaningful for users. EEG can provide a window into the wearer's intent, identifying which information a person needs at a particular moment in a particular place (for instance, from the wearer's focus of attention), and then checking what's available from that environment's IoT devices, the Internet, or both.

    Another challenge is coming up with form factors that people are willing to wear—something that designers of AR glasses have struggled with for the past few years. For starters, hearables have to be comfortable enough for all-day wear. They will also have to complement or augment the real world rather than block it out (that is, be acoustically open), unlike current headsets and headphones. Hearables also have to be either inconspicuous or conspicuous, rather than somewhere in between. In other words, people who don't want to look like geeks might prefer a hearable that can be hidden inside the ear, like today's hearing aids, or that at least looks no different from today's Bluetooth headsets. Other people will prefer designs that showcase their technology choice, the way the Beats logo or the AirPods form factor carries a certain cachet.

    Skeptics might scoff that most people will have zero interest in wearing a hearable, even one as discreet as a hearing aid. But many people wear hearing aids because they enhance their environment, something that hearables will also do, albeit in different, even richer ways. And many, many other people wear Bluetooth headsets, a device that didn't exist a generation ago. Hearables will leverage the fact that tastes can evolve as fast as technology.

    Intrigued? We will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, Hearables and the Age of Mediated Listening, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.

  • Beyond Moore's Law

    By Tom Conte, IEEE Rebooting Computing Initiative Co-Chair and Professor in the Schools of Electrical & Computer Engineering and Computer Science at the Georgia Institute of Technology

    Most people believe Moore's Law says computer performance doubles about every 18 months. Not so. Intel co-founder Gordon Moore actually meant that transistors would get cheaper with each generation: since 1965, when Moore first described the trend, manufacturers of PCs, cell phones, and other devices have been able to buy chips with twice as many transistors for the same price they paid a year and a half earlier. This trend democratized computing by giving consumers access to devices and services that otherwise wouldn't have been affordable to a mass market.

    But the trend began losing steam in the mid-1990s, when the delay of sending signals over the long wires on a chip began to dominate. As a result, microprocessor designers began using superscalar techniques, in which multiple instructions run in parallel on multiple hardware circuits. This sped up programs without requiring any software changes. The approach worked well until 2005, when that generation's chips hit a power density of 200 watts per square centimeter. (Some perspective: a nuclear reactor core is about 100 watts per square centimeter.) At that point, chips couldn't be cooled cost-effectively, so the industry switched to multi-core architectures. These chips have multiple processors, each running slower than a single fast core so they're easier to cool, and they can run multiple programs in parallel. But to use multiple cores to speed up a single program, the programmer has to re-engineer the software to use a parallel algorithm, as the sketch below illustrates.
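    Here is a minimal sketch of that software change: the same summation written serially and as a parallel algorithm. The chunking scheme and worker count are illustrative; the point is that the serial version can never use more than one core, while the parallel version has to be explicitly restructured to use several.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def serial_sum(n):
    # Uses one core, no matter how many the chip has.
    return sum(range(n))

def parallel_sum(n, workers=4):
    # Re-engineered version: split the range so each core sums one chunk.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 10_000_000
    assert serial_sum(n) == parallel_sum(n)
```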
    What's Next?

    Even so, multicore is a Band-Aid. To keep delivering the advances in speed, battery life, and capabilities that Moore's Law has conditioned consumers to expect, a fundamentally different approach to computing is required. IEEE created the Rebooting Computing Initiative to study these next-generation alternatives, which include:

    • Cryogenic computing. Cooling circuits to nearly absolute zero allows superconducting to occur. This enables highly energy-efficient computers while still supporting the current programming model, by enabling superscalar processors that can run independent instructions in parallel. Cryogenic computing would be used in infrastructure such as servers rather than in cell phones and laptops.

    • Reversible computing. Computers and cell phones take multiple sources of information and reduce them to a single answer. All of that other input is wasted, a process that generates the heat you feel when your phone is running a processing-intensive application. Reversible computing recycles that energy so it can be used on the next task, saving electricity and battery life. The catch is that the industry doesn't yet know how to make complex computer circuits reversible. It will take billions in investment and a decade or more of R&D to make reversible computing viable.

    • Special-purpose hardware. Most of today's computing devices are general purpose, meaning they can do a lot of different things reasonably well based on what the program instructs. An exception is graphics processing units (GPUs), which are designed to do one thing very well. This concept can be applied to many other tasks, with the benefit of being far more energy efficient. But wider use of these special-purpose devices would require a paradigm shift on the programming side, where code currently is written for general-purpose hardware.

    • Quantum computing. Today's chips handle data in the form of ones and zeros. Quantum computing can use additional states, providing the flexibility to do more tasks simultaneously. But making these systems operate reliably is a challenging engineering problem, and it will take at least another decade to refine the hardware. Even then, quantum computing works for only a few, albeit important, applications.

    • Neuromorphic computing. Like quantum computing, this approach is so radically different from today's programming models and circuit architectures that it will take moon-shot-level investment and breakthroughs to become viable. Neuromorphic computing's name reflects how it strives to achieve the human brain's energy efficiency and ability to rewire itself. The catch is that we're still learning how the brain does all that, meaning it will be many years before we can make chips that come even close to the brain's energy efficiency and compute capabilities.

    For some applications, such as the Internet of Things, one short-term solution is to move more computing tasks out of devices and into the cloud, where power consumption and heat are less of a concern. This strategy leverages the growing availability of high-speed, low-latency networks. But like multicore, offloading to the cloud ultimately is a Band-Aid, because data centers also need to be green. Eventually they, too, will need to use one or more of the aforementioned next-generation computing architectures.

    If you want a deeper dive into these and other potential alternatives, check out the presentations and papers at http://rebootingcomputing.ieee.org. They're the future of computing.

    About the Author

    Tom will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, Going Beyond Moore's Law, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.

    Tom Conte is a Professor of CS and ECE at the Georgia Institute of Technology, where he directs the interdisciplinary Center for Research into Novel Computing Hierarchies. Since 2012, Tom has co-chaired (along with Elie Track) the IEEE-wide Rebooting Computing Initiative, whose goal is to entirely rethink how we compute, from algorithms down to semiconductor devices. He is also the vice chair of the IEEE International Roadmap for Devices and Systems (the successor to the International Technology Roadmap for Semiconductors). He travels around the world giving talks about how shifts in technology and the slowing of Moore's Law are about to cause a dramatic change in how we compute. Tom is a past president of the IEEE Computer Society and a Fellow of the IEEE.

  • Virtual, Augmented, and Mixed Reality - Promise and Peril for Education

    By Todd Richmond, Ph.D., Director, Mixed Reality Lab/Studio, USC Institute for Creative Technologies/School for Cinematic Arts

    Virtual Reality (VR) is all the rage right now. The hype cycles are expanding, technology evangelists are elevating it to sliced-bread status, and investment money is pouring into startups as well as into new efforts within established companies. But for someone who lived through the dot-com bubble, and who has been knee-deep in the goo of immersive development for some time now, it certainly feels like we're going to party like it's 1999 all over again. If there is a VR bust in 2017, never fear: it won't go away. Just as the internet didn't fade into obscurity following the dot-com collapse, VR, along with Augmented Reality (AR), will remain and eventually become part of our landscape.

    Immersive mediums, spanning Virtual, Augmented, and Mixed Reality (VAMR), are the third new communication and collaboration medium of the new millennium (the internet and mobile being the other two). I prefer to think in terms of Mixed Reality (MxR), as both VR and AR are inherently mixed: a human is at the end of the piece of technology, and the result can be deeply experiential. That fact is often lost on some developers, however, because the relationship between a human and immersive content is complex and still not well understood. It is far easier to focus on latency, resolution, and other quantifiable technical metrics and goals than on "soft" factors such as empathy, efficacy, and ethics.

    That said, VAMR and academia/education will have critical relationships in several directions going forward, particularly as consumers, creators, and experiential innovators. Educational institutions will clearly embrace VAMR as a technology capability, and thus create a market in the commercial sector for new content and experiences that augment or replace existing curricula. For instance, Cardboard viewers are already being used in classrooms to provide "virtual field trips," and medical schools are looking at using AR and/or VR for physician training and education: everything from augmented displays for teaching basic anatomy to virtual humans for training patient interview skills.

    Unfortunately, tech in the classroom has often been pursued for the sake of "new" rather than based on pedagogical needs and desires. Part of the challenge is that new tech does require experimentation, both by those who create it and by the organizations and teachers who must implement it. But all too often, experiments get rebranded as solutions and then fail to produce the desired result, or, more likely, turn out to be solving a problem that doesn't actually exist for the educator. This was the case when mobile became a focus at the start of the smartphone era: PowerPoint decks used for training were converted to PDFs and loaded onto smartphones, with the promise of "mobile education." This porting of old models to new mediums rarely works, and it is critical not to assume that early experimentation yields a vetted solution.

    Many challenges will accompany this move to VAMR, the most fundamental being a complete rethink of what constitutes a classroom experience and of how the virtual and the physical can coexist in meaningful and effective ways. Perhaps more importantly, academia must play a role in helping to figure out the "why" of VAMR.
    As business focuses on selling products and content, someone else needs to work on understanding the deeper meaning of what it means to be human in an increasingly virtual set of worlds, and to explore possible unintended consequences. VAMR content and experiences are very much in the early stages of development (analogous to movies in the early 20th century), though we *expect* them to be better because film and games are so advanced, and we think we can port the old content models to the new medium. But immersion is a different relationship with the user, and the new sense of agency that end users wield massively complicates content development (for example, it breaks traditional linear narrative).

    VAMR is driven by digital: all those 0s and 1s being generated, manipulated, transported, remixed, and consumed. Part of the challenge is that humans are not digital but rather profoundly "analog"; we are part of a physical world that is continuous and monitored by our highly evolved senses, which digital only roughly approximates. So the relationship between the digits and the human is challenging to navigate and understand, and making experiences meaningful rather than just "gee whiz" remains more art than science.

    I view humans and digital as oil and vinegar: they don't mix and there are no solutions, but if you shake them up in the right combination, you can get a tasty salad dressing, which is an emulsion. The problem is that if you stop shaking, they eventually separate. But just as a little egg yolk turns oil and vinegar into mayonnaise, a stable emulsion, there may be ways to bridge and bind the analog and the digital to help create meaning. Emulsional Reality serves as an effective conceptual framework for VAMR development, and I spend my days looking for "egg yolks": those tools, techniques, and approaches that can form a stable emulsion between the human and the virtual. We think story can be one of these binders, helping make for stable and meaningful human-digital experiences.

    VAMR will touch and change every aspect of society one way or another. We need to figure out how to leverage that power in ways that will help us thrive as humans. We should experiment tirelessly and move toward digital experiences that work for humans, and strive to understand why that matters. VR/AR will go from a novelty (as now) to a market (in the next few years) to a commodity capability (like chairs and tables). Getting to that point will take technical advances, content and context experimentation, and deep thinking about the morals, ethics, and deeper meanings presented by these immersive virtual worlds colliding with humans.

    About the Author

    By day, Todd Richmond is the Director of Advanced Prototypes at the University of Southern California's Institute for Creative Technologies (ICT). By night he is a musician, visual artist, and conceptual troublemaker. Todd coined the terms "Emulsional Worlds" and "Emulsional Reality" to describe the challenges humans face in an increasingly virtual world and how analog and digital can coexist.

    Todd will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, AR/VR: The Promise and Danger Behind the Hype, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.

  • Giving Dogs A Voice

    By Thad Starner, Professor of Computing at the Georgia Institute of Technology and a Technical Lead on Google's Glass

    A dog walks up to a woman and says, "Follow me. My owner needs your help." The woman stops and stares at the dog, disbelieving what she has just heard. Fortunately, the service dog is trained for this situation and tugs on his wearable computing vest again, causing it to repeat, "Follow me. My owner needs your help." The dog trots back to its owner with the woman following. Soon she discovers that she has been part of a Georgia Tech experiment to determine which pre-recorded messages are the most effective for service dogs requesting help from strangers.

    Facilitating Interactions for Dogs with Occupations (FIDO) is a collection of projects by my colleague Professor Melody Jackson and me that uses wearable computers to help man's best friend communicate. Service dogs currently alert their owners or caregivers about incipient seizures for people with epilepsy, or about fainting due to dangerously low blood sugar for those with diabetes. While the owner of a diabetic-alert dog might know that her dog stands on its hind legs against her when alerting to low blood sugar, a stranger would probably think the dog was just being overly friendly.

    Wearable computers can empower service dogs to better express themselves to humans. To provide an alert, the diabetic-alert dog tugs on an elastic tug sensor mounted to his FIDO vest, which triggers a pre-recorded response designed to recruit help from humans in the vicinity. In more complex scenarios, the dog may select from one of several different inputs to better communicate with long-term human partners. For example, bomb and drug detection dogs are trained to sit or lie down when they discover a substance of interest. Wearable computers can enable these dogs to specify what type of substance they discovered: tugging on a sensor on the left side of a FIDO vest might indicate an unstable peroxide bomb, while biting a sensor on the other side might indicate a stable compound, like gunpowder.

    Including a GPS unit allows such service dogs to work at a distance. When the dog triggers the wearable, the dog's location is automatically sent to the handler. Furthermore, depending on the type of bomb the dog signals, the handler might decide to recall the dog to avoid accidentally triggering the device. A FIDO vest can include a speaker so that the handler can give the dog verbal commands remotely. Or, for situations when silence is needed, such as with military dogs, vibration motors in the vest can give the dog tactile, instead of verbal, commands.

    In the field of Human-Computer Interaction (HCI), we talk about the affordances perceived by the user. For example, when presented with a door knob, humans try to turn it, push it, and pull it (among other actions) until we determine how to open the door. What affordances does a dog perceive when most door knobs aren't even reachable? Because computer interfaces were designed for humans, we know little about how to create appropriate computing affordances for dogs. The FIDO project explores different affordances to see which are most intuitive (and most trainable) for dogs. Certainly, using a mouse to control the ubiquitous Windows, Icons, Menus, and Pointer (WIMP) interface seems to have little future: dogs have no fingers to click the left mouse button! Also, dogs seem to have little need for the kind of in-depth computer use that humans require. Instead, FIDO vests are designed for quick communicative actions.
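    To make that interaction model concrete, here is a toy sketch of the kind of event loop such a vest could run, with made-up sensor values and thresholds rather than FIDO's actual firmware: a sustained tug past a threshold plays the pre-recorded message and radios the dog's GPS fix to the handler.

```python
TUG_THRESHOLD = 0.6  # normalized stretch-sensor reading; illustrative

MESSAGES = {"help_tug": "Follow me. My owner needs your help."}

# Simulated stretch-sensor readings standing in for hardware polls.
SENSOR_TRACE = [0.10, 0.20, 0.15, 0.80, 0.85, 0.30]

def play_message(text):
    print(f"[vest speaker] {text}")

def send_to_handler(position, event):
    print(f"[radio] {event} at {position}")

def vest_loop(readings, position=(33.776, -84.399)):
    triggered = False
    for reading in readings:
        if reading > TUG_THRESHOLD and not triggered:
            play_message(MESSAGES["help_tug"])     # pre-recorded alert
            send_to_handler(position, "help_tug")  # GPS fix to the handler
            triggered = True                       # debounce: one alert per tug
        elif reading < TUG_THRESHOLD:
            triggered = False

vest_loop(SENSOR_TRACE)
```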
    FIDO has examined tugs using stretch sensors, nose touches using conductive textiles, nose swipes using proximity sensors, and bites using capacitive, resistive, and pressure sensors, to name a few. In the non-wearable domain, FIDO has examined the use of nose-operated touchscreens for service dogs to summon help when at home. Surprisingly, when presented with multiple icons to touch in sequence to dial 911, the dogs invented swiping and multi-touch (nose plus a front paw) gestures in order to hit the icons more quickly and get their reward faster! Our canine participants have repeatedly discovered better interaction techniques in our testing, often leading us to better interfaces than we originally conceived.

    Embedding sensors in dog collars provides yet another means of communicating with dogs. Normally, hearing-assistance dogs lead their owners to the source of a noise, whether it is the doorbell or a baby crying. But what should the dog do in the case of a tornado alarm? One option is to train the dog to make a specific gesture, such as spinning in place, to indicate such alarms. If the owner is nearby, the gesture is obvious; but if the owner is outside or in a different room, motion sensors in the collar can detect the gesture and trigger a vibration-based alert on the owner's mobile phone or smartwatch. Similarly, such a system could be used remotely by search-and-rescue or bomb and drug detection dogs to indicate when they have found the desired target.

    Collar-based motion sensors might even help pets. Just as wearable fitness trackers help humans understand when they are being too sedentary, a dog's fitness tracker might help owners discover that too little exercise leads to undesirable behaviors, like shoe chewing, when they are not at home. Some pet owners might go further and train their pets to communicate with gestures. As with every technology, though, canine computing must strike a balance between useful information and information overload. Personally, there are only so many "Squirrel!" alerts I need from my pet throughout the day.

    About the Author

    Thad Starner is a wearable computing pioneer and has been wearing a computer with a head-up display as part of his daily life since 1993. Thad will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, Not Your Mama's Wearables, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.

    Dr. Starner is a Professor of Computing at the Georgia Institute of Technology and a Technical Lead on Google's Glass. Thad is a founder of the annual ACM International Symposium on Wearable Computers, now in its 21st year, and has produced over 450 papers and presentations on his work. He is an inventor on over 90 United States utility patents awarded or in process. For over two decades, Starner's work has appeared in national and international public forums, including CBS's 60 Minutes and 48 Hours, ABC's Nightline, PBS's NewsHour, CNN, the BBC, National Geographic, The New York Times, New Scientist, and The Wall Street Journal.

  • The Ultimate Skinner Box: Clinical Virtual Reality 1990-2016

    By Skip Rizzo, Ph.D., Director of the Medical Virtual Reality Lab, University of Southern California Institute for Creative Technologies, Los Angeles, CA

    The last decade has given rise to a dramatic increase in the global adoption of innovative digital technologies. This can be seen in the rapid acceptance of, and growing demand for, mobile devices, high-speed network access, smart televisions, social media, hyper-realistic digital games, behavioral sensing devices, and now the second coming of Virtual Reality! Such consumer-driven technologies, considered visionary just 10 years ago, have become common and increasingly essential fixtures of the current digital landscape.

    At the same time, the power of these technologies both to automate processes and to create engaging user experiences has not gone unnoticed by behavioral healthcare researchers and providers. In fact, it was during the "computer revolution" of the 1990s that promising technologically driven innovations in behavioral healthcare began to be considered and prototyped. Primordial efforts from this period can be seen in R&D that aimed to use computers to enhance productivity in patient documentation and record-keeping, to deliver "drill and practice" cognitive rehabilitation, to improve access to care via internet-based teletherapy, and to use virtual reality simulations to deliver exposure therapy for phobias such as fear of heights, flying, and public speaking.

    The clinical use of VR was especially compelling in the early-to-mid '90s, as clinical scientists, dissatisfied with the limited old-school methods of medical practice, psychotherapy, and rehabilitation, began to get excited by the potential of the computer revolution for reshaping and improving clinical care and research. At the time, VR was seen not simply as a way to automate the paradigms of the past with computing, but as a way to create highly realistic, interactive, and systematically controllable stimulus environments that users could be immersed in and interact with to support clinical assessment and intervention. In this regard, VR was seen as an advanced form of human-computer interaction that allows the user to interact with computers and digital content in a more natural or sophisticated fashion than standard mouse and keyboard input devices afford. And with immersive VR, produced by the integration of computers, head-mounted displays (HMDs), body-tracking sensors, specialized interface devices, and real-time graphics, patients could be immersed in a computer-generated simulated world that changed in a natural or intuitive way with head and body motion, providing novel opportunities for clinical purposes.

    From this, VR was seen to offer the potential to create systematic human testing, training, and treatment environments that allow precise control of complex, immersive, dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking, and performance recording are possible. Much as an aircraft simulator serves to test and train piloting ability under a variety of controlled conditions, VR can be used to create relevant simulated environments where the assessment and treatment of cognitive, emotional, and motor problems can take place under a range of stimulus conditions that are not easily deliverable and controllable in the real world.
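    To make the "controlled stimulus plus performance recording" idea concrete, here is a toy sketch of a graded VR exposure session that presents increasingly challenging scenes, logs distress trial by trial, and escalates only after habituation. The scenes, criterion, and simulated ratings are illustrative assumptions, not a deployed clinical protocol:

```python
import random

LEVELS = ["ground floor", "3rd-floor balcony", "10th-floor ledge", "rooftop edge"]
CRITERION = 4  # escalate once reported distress (0-10) falls below this

def present(scene):
    print(f"[VR] rendering: {scene}")

def rate_distress(trial, rng):
    # Stand-in for a patient's subjective units of distress rating,
    # drifting downward across trials as habituation sets in.
    return max(0, 8 - 2 * trial + rng.randint(-1, 1))

def exposure_session(max_trials=5, seed=0):
    rng = random.Random(seed)
    log = []  # trial-by-trial performance record
    for scene in LEVELS:
        for trial in range(max_trials):
            present(scene)
            distress = rate_distress(trial, rng)
            log.append((scene, trial, distress))
            if distress < CRITERION:
                break  # habituated at this intensity; move up a level
    return log

for entry in exposure_session():
    print(entry)
```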
    When these assets are combined within functionally relevant, ecologically enhanced virtual environments, a fundamental advance was envisioned in how human assessment and intervention could be addressed in many clinical and research disciplines. This "Ultimate Skinner Box" was what human experimental researchers and clinicians had always dreamed of, whether they knew it or not! And this was the vision that drove the enthusiasm for Clinical VR in the 1990s.

    But it wasn't an easy road getting to a place where we could realize that vision. When the potential use of VR applications for human research and clinical intervention was first discussed, the technology needed to deliver on the vision was not in place. Consequently, during these early years VR suffered from an imbalanced "expectation-to-delivery" ratio, as most who explored VR systems at the time will attest. Computers were too slow, computer graphics were primitive, 3D user interface devices were awkward and required more effort than users were willing to expend to learn to operate them effectively, and head-mounted displays were costly and bulky, with limited tracking speed, resolution, and field of view. Thus, in 1995, VR experienced its own nuclear winter as the public became disenchanted with the quality of a typical VR experience, and the technology languished for many years in what the Gartner Group has termed "the trough of disillusionment."

    In spite of this, the vision of Clinical VR was sound, and VR enthusiasts continued to plug away at the R&D needed to advance the technology and document its added clinical value. Over the last 20 years, the technology for creating VR systems gradually caught up with that vision of compelling, usable, and effective Clinical VR applications. This period saw dramatic advances in the underlying VR-enabling technologies (computational speed, 3D graphics rendering, audio/visual/haptic displays, user interfaces and tracking, voice recognition, intelligent agents, and authoring software) that now support the creation of low-cost yet sophisticated immersive VR systems capable of running on commodity-level computing devices. Driven in part by the digital gaming and entertainment sectors and a near-insatiable global demand for mobile and interactive networked consumer products, such advances in technological prowess and accessibility have provided the hardware and software platforms needed to produce more adaptable, high-fidelity VR scenarios for the conduct of human research and clinical assessment and intervention. Evolving behavioral health applications can now usefully leverage the interactive and immersive assets that VR affords as the technology continues to get faster, better, and cheaper deep into the second decade of the 21st century.

    Moreover, a significant scientific literature has evolved, almost under the radar, since the 1990s, indicating positive outcomes across a range of clinical applications that leverage the assets VR provides. Such scientific support for the clinical efficacy and safe delivery of VR-delivered care has also inspired the current view that technological innovation may help reduce the escalating healthcare costs that have become one of the hallmarks of post-industrial western society.
    A short list of areas where Clinical VR has been usefully applied includes:

    • Fear reduction in those with specific phobias
    • Treatment for PTSD, depression, and paranoid delusions
    • Discomfort reduction in cancer patients undergoing chemotherapy
    • Acute pain reduction during wound care and physical therapy with burn patients, and in other painful procedures
    • Body image disturbances in patients with eating disorders
    • Navigation and spatial training in children and adults with motor impairments
    • Functional skill training and motor rehabilitation in patients with central nervous system dysfunction (e.g., stroke, TBI, SCI, cerebral palsy, multiple sclerosis)
    • Assessment and rehabilitation of attention, memory, spatial skills, and other cognitive functions in both clinical and unimpaired populations

    To do this, Clinical VR scientists have constructed virtual airplanes, skyscrapers, spiders, battlefields, social settings, beaches, fantasy worlds, and the mundane (but highly relevant) functional environments of the schoolroom, office, home, street, and supermarket. In essence, VR environments can now be created that mimic real or imagined worlds and can be applied clinically to immerse patients in simulations that support the aims and mechanics of a specific assessment or therapeutic approach. As a result, there is a growing consensus that VR has emerged as a promising tool in many domains of clinical care and research.

    As we look to the future, we see Clinical VR as one of the larger domains of general VR usage. In the recent Goldman Sachs market analysis looking at the future of VR out to 2025, gaming and entertainment of course garner the largest market share. While that is to be expected given the public's chronic demand for new and better ways to consume media, the little-noticed item in that analysis is that healthcare comes in second for VR market share. This is no surprise to those who have worked in this area over the years, especially as healthcare costs become one of the largest line items in the U.S. government budget after defense. Entrepreneurs have also taken note: the number of new clinically oriented VR start-ups in the last two years outnumbers the total for the previous 20!

    And the exciting, scientifically informed innovation in Clinical VR we have seen thus far is just a prelude. In addition to the refinement and expansion of existing Clinical VR systems, the next generation of these applications will leverage the powerful advances in Virtual Human (VH) technologies to support credible interactions between patients and VH agents. But we will save the discussion of that domain of Clinical VR for a future installment of this blog!

    About the Author

    Albert "Skip" Rizzo is the Director for Medical Virtual Reality at the University of Southern California Institute for Creative Technologies and holds Research Professor appointments with the USC Dept. of Psychiatry and Behavioral Sciences and at the USC Davis School of Gerontology. Dr. Rizzo conducts research on the design, development, and evaluation of Virtual Reality (VR) systems targeting the areas of clinical assessment, treatment, and rehabilitation. This work spans the domains of psychological, cognitive, and motor functioning in both healthy and clinical populations. Skip will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, AR/VR: The Promise and Danger Behind the Hype, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.

  • The Cultural Ramifications of Ubiquitous AI

    By John C. Havens, Executive Director, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

    Playdates. As parents, we all want our kids to interact well with other children, and when they make a friend at preschool or on the playground, it's a great opportunity for a playdate at someone else's house. But beyond working out the logistics of dropping off your child at someone else's home, there is also a verbal exchange between parents before the playdate takes place, as a form of cultural etiquette. These conversations typically involve issues of safety, but they often also reflect the values of the two families involved. For instance:

    • If a child has food allergies, this needs to be stated before snack time (safety).
    • If one child's family is vegan (by choice), this is also typically mentioned (values).
    • If a movie is going to be shown, parents typically mention the rating or content to make sure the other parent is okay with the film. "Do you show the first scene in Finding Nemo or not?" (values).

    These scenarios may seem mundane, but skipping them can wreak havoc on a relationship with another parent and affect your standing in your circle of friends or community.

    Fast forward to these same types of scenarios with companion robots in your home. These already exist in most people's houses in the form of Siri or an Amazon Echo. They aren't designed in human form, but they analyze and project data about you and your loved ones that is also shared with the cloud. Devices like Jibo or Pepper, robots designed to be spoken to and to analyze emotion, will more overtly amplify these cultural scenarios for parents and consumers in the near future.

    Here's an example. Let's say it's 2020, and your eight-year-old daughter has just been dropped off after a playdate at a friend's house. She turns to you and says, "Christine's robot said I looked sad." When you ask your daughter to elaborate, she explains that the companion robot in her friend's house kept saying things to her like, "Did you have a good day today?" and "Your eyebrows look angry." And before she left to come home, the robot said, "I'm sorry you're so sad, Julie. You should be happy."

    This scenario may sound far-fetched, but there is already precedent for it in products like Mattel's Hello Barbie, outfitted with basic Artificial Intelligence algorithms designed to encourage children to feel that the doll is real. Here's a sample of dialogue between Barbie and a young girl, as reported in the New York Times article "Barbie Wants to Get to Know Your Child" by James Vlahos (September 2015):

    At one point, Barbie's voice got serious. "I was wondering if I could get your advice on something," Barbie asked. The doll explained that she and her friend Teresa had argued and weren't speaking. "I really miss her, but I don't know what to say to her now," Barbie said. "What should I do?" "Say 'I'm sorry,'" Ariana replied. "You're right. I should apologize," Barbie said. "I'm not mad anymore. I just want to be friends again."

    While designers at Mattel may have had the best intentions in creating this type of technology, it provides a perfect example of how manufacturers of AI are inherently making ethically oriented decisions for consumers with their products. For example, how would you feel if, on a first playdate at another parent's home, your child got the advice attributed to Barbie in this example? You might be appreciative because you feel it was smart advice, or you might be angry because you had just met the parents and felt it was inappropriate for them to provide this type of counsel without understanding your thoughts on the matter.
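    The safeguards discussed below often start with family-adjustable preference profiles. As a purely hypothetical sketch (the keys and checks are illustrative, not any vendor's actual settings), such a profile might gate what a companion robot may say and share:

```python
FAMILY_PREFERENCES = {
    "allergies": ["peanuts"],        # safety
    "dietary": ["vegan"],            # values, not just safety
    "media_rating_max": "G",
    "comment_on_emotions": False,    # e.g., never tell a child "you look sad"
    "share_audio_with_cloud": False,
}

def may_speak(utterance_kind, prefs):
    """Gate a class of robot utterances on the family's stated values."""
    if utterance_kind == "emotion_observation":
        return prefs["comment_on_emotions"]
    return True

def may_upload(data_kind, prefs):
    """Gate cloud sharing on the family's privacy settings."""
    if data_kind == "audio":
        return prefs["share_audio_with_cloud"]
    return True

assert not may_speak("emotion_observation", FAMILY_PREFERENCES)
assert not may_upload("audio", FAMILY_PREFERENCES)
```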
    At a deeper level, consider the example in which a doll calls a child "sad." If a manufacturer, well intentioned or not, does not have psychologists along with affective computing (emotion) experts creating its technology, it may not understand the full ramifications of having a device tell a child about their emotions. This is especially true when a child may feel ashamed because the robot states these things in front of a friend or the friend's parents.

    The good news is that many AI and device manufacturers are aware of these types of issues and are actively building systems that take cultural ramifications into account to avoid unintended consequences. And along with the technical aspects of these safeguards (such as specific privacy settings for families to adjust based on their preferences regarding data sharing), most also understand the cultural implications these devices will have when entering people's homes.

    Here's a good example of a technologist planning ahead on these ethically oriented issues: Dr. Edson Prestes of Brazil has identified in his research the importance of cultural relevance for robots based on where a person is from. For instance, if a robot is built to have a face and eyes reminiscent of a human being, where should it look when speaking to a human? In the United States, robots should most likely be designed to look into someone's eyes, as this denotes integrity. In many Eastern cultures, however, a robot's eyes should be designed to look toward the floor as a sign of deference and respect.

    All of these examples point to the need for ethically aligned design in consumer products outfitted with Artificial Intelligence. And since robots are simply the external form of a product often imbued with AI, this means that all products should be created using ethically aligned design methodologies in the algorithmic age. Beyond the normal processes of ensuring basic physical safety, when manufacturers take the time to identify end-user values and cultural considerations and build to them, they'll not only decrease risk but also beat out competitors whose products haven't earned consumers' trust via these methodologies.

    In the sense that honoring end-user values in this way is a form of sustainability for human wellbeing, a shorthand for thinking about this is "ethics is the new green." In the same way we need to protect the planet, we also need to prioritize the people using the algorithmically and emotionally driven products that influence every aspect of our lives today.

    About the Author

    John C. Havens is Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, as well as the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines. John will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, Ethically Aligned Design: Setting Standards for AI, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org. You can follow him @johnchavens on Twitter.
