The moral panic over VR and privacy

Educause has a new 3-minute video looking at the future role of learning technology in US higher education. It's all quite jolly -- there's a nice encouraging soundtrack and an engagingly delivered narration. Many topics are addressed and virtual reality (VR) gets a brief mention with a head-mounted display (HMD) giving you, the student, first-person access to a clearing deep in the jungle where a brightly coloured bird lands on a tree stump.

But then, in a shocking plot twist, the bird is revealed to be a virtual drone whose sensors record student motion, attention and academic progress, feeding the data continuously back to an analytics engine. It turns out that you haven't been keeping up. The "bird" looks sad, sheds a few feathers and coughs. You made it SICK!

Fast-forward three years and you're applying for a job as a conservation biologist. The interviewer executes a zoom gesture to adjust her personal augmented reality display, and a furrow appears on her brow. "Thanks for sharing your transcript. I see your performance on Mondays was consistently lower than on other days of the week, but your activity on social media was higher. Care to explain?"

Yes, I made up the last two paragraphs (and those that follow). The video does, however, allude sotto voce to the issues of privacy and autonomy in the era of big data. As systems become progressively more adaptive, there is also the question of where personal data resides (with the online adaptive textbook provider?) and how it is used and managed.

Where does VR fit in all this?

If recent developments are to be believed, VR will eventually make all this much, much worse.

The interviewer continues. "The biometric data collected by the headset shows your attention wandering all over the place and a significant lack of empathy with the plight of the bird. Comment?"

Where it could all go wrong

Over the past week or so there have been several posts and presentations evincing alarm over the direction VR may be taking. The video of Raph Koster's talk at GDC was the first to catch my eye, but then there was Suzanne Leibrick's talk at SXSW and a Kent Bye podcast in which he interviews Jim Preston and collates some of his earlier thoughts on VR and privacy.

Raph's concern was that web and social media companies were constructing, unwittingly or in the name of "cool", VR and AR experiences that crossed the line from game to real life without considering the consequences or being able to manage them.

Suzanne and Jim focus more on the collection of biometric data. While they acknowledge potential medical benefits, they are concerned about the potential for rich, subconscious data streams to be used to predict and influence behaviour to a degree hitherto only imagined by advertisers, science fiction authors and, maybe, politicians.

Such sensor-laden HMDs will presumably be used to adapt games to players dynamically, albeit initially at additional cost. While it is a little premature to panic, these presentations serve as a valuable wake-up call to the public, government and developers alike.

Where does this leave OpenSim?

All VR depends on presenting a visual rendering of the scene in front of the first- or third-person avatar camera. The position of the camera must therefore go to the server, where it might potentially be logged (I'm not aware of any facility to do that at present in OpenSim, but it's not my area of expertise). OpenSim also allows you to visualize the current focus of nearby avatars, but it usefully provides an opt-out as well. By contrast, it is relatively simple to write scripts that log avatar location and, optionally, performance on tasks, as the sketch below illustrates. Use of shared media inworld, e.g. web pages, can also be controlled via preferences to avoid unwitting disclosure of IP addresses.
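As a minimal sketch, here is what such a location-logging script might look like, using standard LSL functions as implemented by OpenSim. The five-second polling interval is arbitrary, and llOwnerSay stands in for a real logging channel (a database or web service reached via llHTTPRequest, say):

```lsl
// Sketch of a region script that polls avatar positions.
// Standard LSL as implemented by OpenSim; the interval and
// the chat-based "logging" are placeholders for illustration.
default
{
    state_entry()
    {
        llSetTimerEvent(5.0); // sample every five seconds
    }

    timer()
    {
        // All avatars currently present in the region
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer n = llGetListLength(agents);
        integer i;
        for (i = 0; i < n; ++i)
        {
            key id = llList2Key(agents, i);
            // OBJECT_POS works for avatars as well as objects
            vector pos = llList2Vector(llGetObjectDetails(id, [OBJECT_POS]), 0);
            // A real logger would post this out via llHTTPRequest;
            // here it simply goes to owner chat.
            llOwnerSay(llKey2Name(id) + " at " + (string)pos);
        }
    }
}
```

The point is not that this is sinister in itself, but that a couple of dozen lines in a scripted object suffice, which is why the scripting route, rather than the server, is the more realistic avenue for surveillance here.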

Of course, OpenSim can also be used behind a firewall, or even offline in single-user mode from a local install, ultimately even from a USB stick carrying both server and viewer. Moreover, gaze in an environment free of head or eye tracking is not a definitive index of attention. An apparently preternatural interest in a particular location may simply indicate that the user is looking elsewhere at a web page or has gone to make a coffee.
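For the record, opting for a self-contained standalone setup, with all services running locally rather than on an external grid, is a one-line choice in OpenSim.ini. The excerpt below follows the layout of the stock OpenSim.ini.example; treat it as a sketch rather than a recipe for your particular version:

```ini
; OpenSim.ini -- run the simulator with its own local services
; (standalone) rather than connecting to an external grid.
[Architecture]
    Include-Architecture = "config-include/Standalone.ini"
    ; For grid mode you would instead uncomment:
    ; Include-Architecture = "config-include/Grid.ini"
```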

Not perfect, perhaps, but likely as good as it gets.

End-point?

The interviewer's eyes flicker left, then right, and she purses her lips. "Our AI predicts an 80% chance that you'd been partying too hard over the weekend and a 20% chance that you'd been looking after a sick relative. Thoughts?"

Meanwhile, in a data center on the other side of the world, the interviewer is being appraised by her own AI…

I'd like to think we could use OpenSim to model some of the issues raised by these concerns. The meta-level analysis might be challenging to convey, but getting students to develop and role-play scenarios similar to the above might be one way forward.