Biozone OAR

biozone_001.jpg

The main components of this OAR date back to around 2008 in their Second Life incarnation. There are a number of objects relating to the bacterium Mycobacterium tuberculosis, most notably a giant circular genome that your avatar can walk round. Sadly, the server which previously managed the genome touch events off-world is no longer running, so error messages may be encountered and the gene markers shown in the picture no longer rez. Naturally, the gene annotation may also be somewhat dated. The genome is still, however, visually striking and could form the basis, for example, of a poster session about the organism and its genome.

Provided by Graham Mills_2

OAR

Single Region Universal Campus OAR

2017-03-18_22-04-49.png

Universal Campus is a signature build by Nebadon Izumi and is used by Kitely as one of its optional OAR files loaded when a new world is created. The default build is a 2x2 var but this scale, while impressive, is not always necessary when working with smaller classes. The aim was to produce a version that would fit on a single region and contain fewer than 15,000 prims, making it compatible with a 10-avatar starter world. It was decided that the build would comprise the main hall and science lab in the first instance.

The build (Universal Campus SR or Single Region) is very much a work in progress, but the current version can be downloaded for use under the same terms as the original, i.e. CC-BY-SA 3.0.

Photospheres and panoramas

This article by Graham Mills_2 first appeared in the SOTK Newsletter dated 6th May 2016


Panoramas and photospheres are an interesting place to start if you want to connect your build with real-world scenes. Many smartphones now allow you to capture panoramas and photospheres that, while sometimes imperfect, may be good enough to set the scene (and are something students can make). You can convert both to cubemaps using an online utility provided by Sergey Gonchar. You get back a set of six images suitable for texturing the inner surfaces of a room-sized cube (sometimes referred to as a skybox) in Kitely.

You may need to play around a little with texture scaling and orientation, but the results can be fairly impressive within the limitations intrinsic, say, to 360-degree video compared with VR. Graham found that photospheres give the best results, and suitably licensed images for testing can be found via the usual suspects and elsewhere besides (for example, sphereshare.net has Terms of Service that encourage downloads). Graham had to add borders to the top and bottom of his panorama to stop content leaking into the distorted floor and ceiling textures (he used IrfanView to do this).
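If you would rather script the texturing than do it by hand, the following is a minimal LSL sketch (not from the original article; the texture names are placeholders for your uploaded cubemap UUIDs). It applies one image per face of a box prim and mirrors each horizontally so the pictures read correctly when viewed from inside. Expect to swap entries and adjust per-face rotation until the seams line up, much as described above.

    // Minimal skybox-texturing sketch. Assumes the six cubemap images have been
    // uploaded and their UUIDs pasted in below (placeholder names shown).
    default
    {
        state_entry()
        {
            // box prim faces: 0 = top, 1-4 = sides, 5 = bottom
            list tex = ["top-uuid", "side1-uuid", "side2-uuid",
                        "side3-uuid", "side4-uuid", "bottom-uuid"];
            integer face;
            for (face = 0; face < 6; ++face)
            {
                llSetPrimitiveParams([PRIM_TEXTURE, face,
                    llList2String(tex, face),
                    <-1.0, 1.0, 0.0>,   // negative X repeat mirrors the image for inside viewing
                    <0.0, 0.0, 0.0>,    // no offset
                    0.0]);              // rotation in radians - adjust per face as needed
            }
        }
    }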

2017-03-18_17-33-07.png
Inworld view of the sculpture gallery. Left, front and right sides are shown in part or in their entirety (the visible line at the ceiling is due to seams between the cubemap faces; no attempt was made to optimise images before upload).

You can also make cubemaps from screenshots captured inworld. In this case you need to use a stitching program such as Hugin (free) to create and save a panorama in equirectangular format before creating the cubemap textures. Graham found that a FOV of 45 degrees worked satisfactorily on his 12 overlapping images (with 12 shots covering 360 degrees, a 45-degree FOV gives roughly one-third overlap between adjacent frames). Although he included coloured inworld markers as a guide, these are probably not essential. Again, a border was added and the image resized (reduced by 50%) before creating the cubemap. This was for a single row of images; given more time it could be expanded vertically.

2017-03-18_17-34-02.png
Part of the OpenSim Micrographia build from a similar inworld perspective (no attempt to optimise inworld textures).

Panoramas and photospheres can also be viewed on the web using the Spherecast viewer provided by OpenSim educator Justin Reeve. This also has an option to generate a pseudo-3D image for viewing with Google Cardboard. Additional Spherecast features are planned for the near future and are presently invite-only. Demonstrations can be seen on their website, including a sequence of narrated RL images that evoke memories of Justin’s wonderful Undersea Observatory OAR.

2017-03-18_17-37-32.png
The same panorama viewed using Spherecast in Cardboard mode.

If you are OK working with snippets of code, vrEmbed.org can be used to create simple interactive tours/stories for Cardboard using ordinary 2D web images and panoramas as well as 3D stereo images.

You can even incorporate virtual content in dae/COLLADA format into photospheres using the Holobuilder application (web-based, with a free starter account, though scenes can be adapted by others). Remember that you can export your own prim content in dae format from Firestorm via the left-click menu. Simple animations and labels can be applied, and scenes can be hyperlinked to form a tour. As with most of the above, you can display the scene in Kitely using media-on-a-prim although, of course, it will now be 2D.
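For reference, here is a minimal LSL sketch (mine, not from the article) for putting a web scene on a prim face using media-on-a-prim. The URL is a placeholder for wherever your Holobuilder or similar scene is published, and shared media must be enabled both for the world and in the viewer.

    // Media-on-a-prim sketch: attaches a web page to face 0 of this prim.
    string SCENE_URL = "https://example.com/my-scene";   // placeholder URL

    default
    {
        state_entry()
        {
            llSetPrimMediaParams(0, [
                PRIM_MEDIA_HOME_URL,    SCENE_URL,
                PRIM_MEDIA_CURRENT_URL, SCENE_URL,
                PRIM_MEDIA_AUTO_PLAY,   TRUE,
                PRIM_MEDIA_AUTO_SCALE,  TRUE
            ]);
        }
    }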

Hopefully, the above has whetted your appetite. You can create mini-scenes in Kitely or RL and display them inworld, on your PC or your mobile device. Imagine a teleporter that displays your arrival point before you make the jump! Inevitably there can be rough edges; some of the Cardboard displays proved prone to flipping or failed to keep Graham’s phone alive. Some of that might be down to his use of niche Cyanogen OS kit. Nevertheless, there is much potential here both in and beyond OpenSim.

Addendum

If you are interested in an inworld 3D environment derived from Google Maps then RezMela Surroundview Lite is currently available as a preview version.

The moral panic over VR and privacy

Educause has a new 3-minute video looking at the future role of learning technology in US higher education. It's all quite jolly -- there's a nice encouraging soundtrack and an engagingly delivered narration. Many topics are addressed and virtual reality (VR) gets a brief mention with a head-mounted display (HMD) giving you, the student, first-person access to a clearing deep in the jungle where a brightly coloured bird lands on a tree stump.

But then in a shocking plot twist the bird is revealed to be a virtual drone with sensors recording student motion, attention and academic progress, data that is fed back continuously to an analytics engine. It turns out that you haven't been keeping up. The "bird" looks sad, sheds a few feathers and coughs. You made it SICK!

Fast forward three years and you're applying for a job as a conservation biologist. The interviewer executes a zoom gesture to adjust her personal augmented reality display and a furrow appears on her brow. "Thanks for sharing your transcript. I see your performance on Mondays was consistently lower than on other days of the week but your activity on social media was higher. Care to explain?"

Yes, I made up the last two paragraphs (and those that follow). The video does, however, allude sotto voce to the issues of privacy and autonomy in the era of big data. As systems become progressively more adaptive, there is also the question of where personal data resides (with the online adaptive textbook provider?) and how it is used and managed.

Where does VR fit in all this?

But if recent developments are to be believed, VR will eventually make this much, much worse.

The interviewer continues. "The biometric data collected by the headset shows your attention wandering all over the place and a significant lack of empathy with the plight of the bird. Comment?"

Where it could all go wrong

Over the past week or so there have been several posts and presentations evincing alarm over the direction VR may be taking. The video of Raph Koster's talk at GDC was the first to catch my eye but then there was Suzanne Leibrick's talk at SXSW and a Kent Bye podcast in which he interviews Jim Preston and collates some of his earlier thoughts on VR and privacy.

Raph's concern was that web and social media companies were constructing, unwittingly or in the name of "cool", VR and AR experiences that crossed the line from game to real life without considering or being able to manage the consequences.

Suzanne and Jim focus more on the collection of biometric data. While they acknowledge potential medical benefits, they are concerned at the potential of rich, subconscious datastreams to be used to predict and influence behaviour to a degree only imagined hitherto by advertisers, science fiction authors and, maybe, politicians.

These super-HMDs will presumably be used to adapt games to players dynamically, albeit initially at additional cost. While it is a little premature to panic, these presentations serve as a valuable wake-up call to the public, government and developers alike.

Where does this leave OpenSim?

All VR depends on presenting a visual rendering of the scene in front of the first- or third-person avatar camera. The position of the camera must therefore go to the server, where it might potentially be logged (I'm not aware of any facility to do that at present in OpenSim, but it's not my area of expertise). OpenSim also allows you to visualize the current focus of nearby avatars, but it usefully provides an opt-out as well. By contrast, it is relatively simple to write scripts that log avatar location and, optionally, performance on tasks. Use of shared media inworld, e.g. web pages, can also be controlled via preferences to avoid unwitting disclosure of IP addresses.
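To make the contrast concrete, here is a minimal sketch of the kind of location-logging script the previous paragraph has in mind (my illustration, not a recommendation): a single prim that reports the name and position of every avatar in the region to its owner every 30 seconds. A real logger would typically post the data to an external server instead.

    // Simple avatar-position logger sketch; drop into a prim in the region.
    float INTERVAL = 30.0;   // seconds between samples

    default
    {
        state_entry()
        {
            llSetTimerEvent(INTERVAL);
        }

        timer()
        {
            list agents = llGetAgentList(AGENT_LIST_REGION, []);
            integer i;
            for (i = 0; i < llGetListLength(agents); ++i)
            {
                key id = llList2Key(agents, i);
                list details = llGetObjectDetails(id, [OBJECT_NAME, OBJECT_POS]);
                llOwnerSay(llList2String(details, 0) + " @ " +
                           (string)llList2Vector(details, 1));
            }
        }
    }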

Of course, OpenSim can also be used behind a firewall or even offline in single-user mode from a local install, ultimately even from a USB stick carrying both server and viewer. Moreover, gaze in an environment free of head or eye tracking is not a definitive index of attention. An apparently preternatural interest in a particular location may simply indicate that the user is looking elsewhere at a web page or has gone to make a coffee.

Not perfect, perhaps, but likely as good as it gets.

End-point?

The interviewer's eyes flicker left, then right and she purses her lips. "Our AI predicts an 80% chance that you'd been partying too hard over the weekend, a 20% chance that you'd been looking after a sick relative. Thoughts?"

Meanwhile, in a data center on the other side of the world, the interviewer is being appraised by her own AI…

I'd like to think we could use OpenSim to model some of the issues raised by these concerns. The meta-level analysis might be challenging to convey, but getting students to develop and role-play scenarios similar to the above might be one way forward.

Terrains

This article by Graham Mills_2 was originally published in SOTK Newsletter 28th March 2016 and makes extensive reference to Kitely-specific aspects of terrain.

Some caveats

Graham decided to learn a little more about OpenSim terrain by compiling this article from existing sources (gratefully acknowledged). It is not a comprehensive guide, more an overview. You are advised to back up any existing terrain before experimenting. Please share any tips or corrections on the Kitely forum.

Note that Kitely now handles terrain files in part outside the viewer. Downloads are initiated in the viewer, typically in the World>Region details>Terrain dialog, but completed via a web link provided almost immediately in a viewer message. Uploads are similarly made via the web using the same dialog used for OAR files in the world management page.

Although the basic region size is a 256x256 m square, Kitely provides a range of options up to 64-fold larger, i.e. 8x8 regions, roughly a 2x2 km square. While Second Life(TM) region terrains are compatible with OpenSim, not all Second Life tools will necessarily work at this scale.

Aims

Creative use of terrain allows you to introduce features such as beaches, rivers and mountains, all the while subtly guiding the movement of visitors and progressively revealing viewpoints, caves, waterfalls and plantings. A suitably rendered landscape can form the basis for a storybuild which develops as the avatar explores or, alternatively, divide a world into themed areas for particular activities. Of course, it is usually desirable to restrict the ability to terraform; this option is on the World>Region details>Region tab in Firestorm.

OpenSim terrains can be sourced from Kitely Market (under landscaping) or the web (SL terrains work as well), either as RAW textures or as part of OAR files. When uploaded, the RAW files define the terrain height, with complementary textures (e.g. sand, grass, rock, snow) automagically mixed in according to height. The height at which these TGA-format tiled textures take effect can be controlled via the viewer terrain dialog.

For those with the necessary skill and patience, inworld editing can be used to model the terrain (the range of brushes is notionally extensible when working on a local server). However, features with overhangs such as caves cannot be modelled this way (the underlying heightmap can only accommodate one elevation value per point on its 1 m grid) and are best implemented by importing suitable mesh landforms, e.g. from Kitely Market (under landscaping) or Outworldz.com.

Terrain sources

  • Linda Kellie terrains
  • User Fritigern and others: http://opensimulator.org/wiki/User:Fritigern/SandBox
  • (Seanchai Mall will have links inworld to terrain files; Seanchai Library group membership normally required)

Editors and terrain file format

Another approach to creating terrains is to use an editor to create the RAW file, which is a 13-channel texture in which the first three channels (values 0-255) are used to store the height (red channel), a multiplier (blue) and the sea level (green; normally 20 m in Kitely and set in the client). Other channels are used to store parcel information etc. The multiplier is applied as channel 1 * multiplier / 128, so the maximum height is about 512 m. Note that very high mountains and steep slopes tend to look sub-optimal, and imported mesh may be a better option. Again, Kitely Market has suitable products for cliffs and world surrounds.
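As a worked example of that formula (my numbers, purely illustrative): a height-channel value of 100 paired with a multiplier value of 64 encodes 100 * 64 / 128 = 50 m, while the maximum of 255 * 255 / 128 comes to roughly 508 m, hence the ~512 m ceiling mentioned above.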

While you can create, view and edit RAW files in various image and mesh editors, including Photoshop and Blender, the less artistically inclined can use a program such as L3DT, Terragen or TerreSculptor to generate semi-random terrain for subsequent modification.

Downloads

  • L3DT standard: http://www.bundysoft.com/L3DT/downloads/standard.php
  • TerreSculptor: http://www.demenzunmedia.com/home/terresculptor/
  • Terragen 4 (no terrain export in free version): http://planetside.co.uk/free-downloads/terragen-4-free-download/

Tutorials and tips

  • L3DT tutorial: http://opensimulator.org/wiki/Using_L3DT
  • Vanish's tutorial on using an earlier version of Terragen: https://web.archive.org/web/20130403031215/http://opensim-creations.com/2010/06/05/howto-create-megaregion-terrain-raw-files-for-second-life-and-opensim
  • Bailiwick (for editing channels): http://www.spinmass.com/Software/BailiwickInstruct.aspx
  • Tips: http://opensimulator.org/wiki/Tips#Terrain_Tidbits
  • More tips: http://wiki.secondlife.com/wiki/Tips_for_Creating_Heightfields_and_Details_on_Terrain_RAW_Files
  • Megaregion terrains: http://www.hypergridbusiness.com/2012/09/how-to-make-megaregion-terrains/
  • Blender (see also SL Primstar-2): http://blog.nalates.net/2011/10/09/opensim-terrain-tutorial-via-blender-part-1/

Note that some of these tutorials are old and software versions may have changed such that the instructions no longer apply.

Resizing terrains

Kitely residents can use the Kitely resizer to expand terrain to cover larger worlds (or vice versa): https://www.kitely.com/virtual-world-news/2015/05/02/resize-kitely-worlds-using-a-graphical-tool/

A brief overview of L3DT

L3DT is relatively straightforward to use. A basic terrain can be created using either the designable map option or a Perlin noise algorithm. This can be edited in a 3D view and exported as a mesh, Terragen, PNG or RAW file, although the latter is not in a format suitable for uploading. Historically, Bailiwick has been used on Windows to add the missing channel data OpenSim expects of a RAW file, but the program is no longer maintained and is constrained to 256x256 m regions. The server-based Sim-on-a-Stick accepts PNG files but also defaults to a single region unless updated (see below). It will, however, load Terragen-format files.

2017-03-18_20-45-06.png
L3DT designable map imported into SoaS via console as terragen file

An example of a real-world terrain

Formerly the tutorial used Terrain.party as an example, but the download function now appears to fail.

A number of terrains of roughly 2 km square can be downloaded from https://houseprices.io/lab/lidar/map.

Sculpt previews

Although a somewhat outdated and inefficient technology, inworld renders of terrain can be readily generated using sculpted prims. Right-clicking the world map in the Cool VL Viewer allows you to save the map texture and a sculpted prim texture in either planar or spherical format. For large regions/worlds you may have to split the sculpted prim texture in an image editor such as IrfanView and upload each 256 m unit texture in lossless format. http://sldev.free.fr/TerrainSculptor/index.html

Simple terrain sculpts can be made online: http://svc.sl.marvulous.co.uk/raw2sculpt-3.2/raw2sculpt.html

Terraforming in Sim-on-a-Stick

Sim-on-a-Stick (also known as SoaS) is a somewhat venerable distribution of OpenSim (0.8.0 post-fixes). Once installed it is run offline (indeed, even from a USB stick) and can be a good choice for schools not wishing to use public grids. It can also be upgraded by following the instructions on the website, and this will currently be required for full varregion support. To expand the default 256x256 m region, edit RegionConfig.ini in the bin/Regions folder by adding the lines SizeX = 512 and SizeY = 512 to the end of the first region definition (the one whose lines are not preceded by a semi-colon), as in the sketch below.
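For orientation, the edited region definition might look something like this (the region name, UUID, port and hostname shown here are illustrative defaults rather than values from the article; only the two Size lines are the addition):

    [My SoaS Region]
    RegionUUID = 11111111-2222-3333-4444-555555555555
    Location = 1000,1000
    InternalPort = 9000
    AllowAlternatePorts = False
    ExternalHostName = 127.0.0.1
    SizeX = 512
    SizeY = 512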

When the update has finished, first run MoWeS in the SoaS root folder and then opensim.exe from the diva-r25084/bin folder. There is a notecard explaining avatar login. Wait for the text-based console to load. Help is available by typing help all or, for our purposes, help terrain. Useful commands include:

  • terrain fill 21 (levels terrain at 1 m above default sea level)
  • terrain load fname (where fname is the name of a RAW, png, terragen or other format file in the bin folder; avoid spaces in fname or enclose in double quotes)
  • terrain save fname (save in RAW format for subsequent import into Kitely via the client; alternatively import as OAR)
  • terrain elevate 1.0 (raise terrain by 1 m)
  • terrain multiply 1.5 (multiply all height values by 1.5)
  • terrain effect ChannelDigger (creates 8x8 grid)
  • terrain newbrushes true (modifies final three brushes on inworld dialog to support erosion; useful for smoothing jagged contours; set to false to revert)

Note that SoaS also has commands to rotate, scale and translate all the objects in a sim (the scene). See help Objects.

  • http://simonastick.com/
  • Use in primary schools: https://sites.google.com/site/virtualworldsprimary/simonastick
  • Running SoaS on a Mac (by Dot): https://www.kitely.com/forums/viewtopic.php?f=10&t=2997&p=17749

Maperitive

To make a low-resolution mesh terrain of a real location together with an associated map texture:

  1. download and install Maperitive (PC only) from http://maperitive.net/
  2. locate the place you want to map (make it region-size or thereabouts, i.e. not county/region or country size)
  3. choose Tools>Export To 3D and save the COLLADA/dae file
  4. import to SketchUp (formerly Google, now Trimble -- I used v8; you will get errors -- just ignore them) and export in the same format
  5. upload the file to OpenSim. The mesh should appear in your inventory when the upload is complete; you just need to rez it. The mesh will be large by default and most likely requires some expertise in order to cam and move it into position. The low-res textures are CC-BY-SA from the OpenStreetMap project.

Flattener

This is one example of a scripted tool for inworld terraforming. It's especially useful when you want to flatten a small area at a specific height. https://www.kitely.com/market/product/3835590/Land-Flatterner

Bulldozer Terrain Tool

A more sophisticated tool that allows you to create sloped terrains. https://www.kitely.com/market/product/14006089/Bulldozer-Terrain-Tool

MAYA and what it means for OpenSim

This post started out as a comment on the MAYA (Most Advanced Yet Acceptable) design concept but somehow turned into a ramble on the place of OpenSim in the world of educational VR. My apologies…

The context: how we got here

It seems that every 10 years or so virtual reality (VR) hits the headlines and then subsequently fades from view during a so-called virtual world winter. In 2007 it was the hype surrounding Second Life (SL) that spawned many imitators, including OpenSim, which is open source but to a degree content-compatible with the commercial SL. We talked of immersive environments then but were largely focused on the desktop display.

In 2017, however, it is Head-Mounted Displays (HMDs) that are in the limelight, inspired by Facebook's 2014 acquisition of Kickstarter project Oculus for $2bn. Hot on the heels of the Oculus Rift we have the likes (in no particular order) of Google Daydream, HTC Vive, Samsung Gear VR, the PSVR and whatever Microsoft delivers in terms of mixed and augmented reality. Never before have so many Big Players been engaged, but even so there is pragmatism amid the industry hype. Gabe Newell of Valve is "comfortable" with VR being a "complete failure" (though, of course, he expects the opposite).

The advent of the Big Players, of course, is part of the problem. Everyone wants a piece of the action if only through Fear of Missing Out. The Big Players have deep pockets and get most attention but are constrained by their very scale, running the risk of being blind-sided by smaller, more nimble competitors as the technology evolves and fresh thinking emerges. Beyond the devs and early adopters, the consumer is irked by high ticket prices, variable quality content and the suspicion that technology churn is going to lead to an evolutionary dead end and buyer's remorse. A few recall Google's six-month excursion into desktop VR in the form of Google Lively back in 2008.

Standards such as OpenXR may afford some much-needed interoperability in due course but these are early days.

The perspective: looking to the future through successful design concepts of the past

Of course, everyone is sure that what will deliver a return on their investment is "the experience" but nobody is quite sure what that will be. Games? Education? Virtual tourism? Beyond games and apps, will we see packaged experiences as in Sansar or something more like a virtual world? Will Facebook Rooms dominate social VR by default?

It was noticeable that senior figures interviewed at the recent GDC were reluctant to identify the "killer app" of VR, one preferring to talk pragmatically (that word again) of multiple domain-specific "hero apps" instead.

The three principles

An article on industry website Upload VR attempts to bring some order to this uncertainty by inviting folk to look back at design principles that have proven useful in the past.

The first, The Message of the Medium, reminds us that we are still at an early stage and may not be using VR appropriately. Moreover, the principle also encompasses a "known unknown", the likelihood that we won't know what any new technology is good for until we've advanced beyond the preconceptions inherited from its predecessor.

In this regard the most awesome feature of virtual worlds is also their primary weakness, namely that almost anything can be developed there (and it's implicitly multi-user) but that more often than not there is a simpler way to achieve the same end-result. Indeed, our affinity for phones and apps depends on the work of super-smart folk reducing the friction involved in one particular task to levels of comfort amenable to mass adoption. This is a tough "ask" for amateur developers in virtual worlds although the phenomenon of Minecraft encourages us to see the user as the developer and a highly engaged one at that.

The second principle, MAYA (Most Advanced Yet Acceptable), was formulated by Raymond Loewy:

Our desire is naturally to give the buying public the most advanced product that research can develop and technology can produce. Unfortunately, it has been proved time and time again that such a product does not always sell well. […] The adult public’s taste is not necessarily ready to accept the logical solutions to their requirements if the solution implies too vast a departure from what they have been conditioned into accepting as the norm.

So what most people are comfortable with is significant but incremental progress rather than an apparently unconsidered leap into a technology they didn't know they needed (Apple might be an exception). You can make a boring product more acceptable by giving it a modest degree of novelty (a shiny surface) but may have to tone down features if you seek mass-market appeal from a revolutionary product.

The third and final principle is Create comfort, then move on, i.e. work from a basis of familiarity using an interface that mimics reality. Perhaps the educational experience needs to start from the familiar, the campus library, the laboratory, the lecture theatre, even if it subsequently branches away once the students have found their "wings". This approach of cloning the campus was derided (a lot) back in 2007.

What does this mean for VR and for OpenSim?

The new developments in VR, based on improbable levels of investment, encourage us to view OpenSim through the lens of a deficit model ("what it can't do") rather than focusing on what we can use its very rich feature set to achieve or, at worst, prototype.

The deficit model is a siren call to add new features that will "make the difference" and lead to mass-adoption. If the principles above have value, however, they suggest that we need instead to simplify the experience, not for the benefit of the devs and early adopters, but for the average user, both in terms of the viewer and the inworld build. The Outworldz DreamWorld installer points in this direction with the (potentially configurable) OnLook viewer and its own library of OAR-based experiences. An educational edition would be a very interesting development (there is some educational content already).

However, we need always to ask why we are using OpenSim and to be open to novel possibilities that emerge as we work with a sim. For example, if I'm interested in learning about the circular economy, why not script multi-component objects that provide a service (a simplified self-driving car, perhaps) but remain the property of the supplier, that can be shared, that age, can be refurbished, and have their components recycled and cascaded? Then get students to participate in the economy, both as creators and consumers.
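Here is a toy LSL sketch (purely illustrative, not from the post) of one such object: it ages on a timer, asks for refurbishment when its condition gets low, and resets when it hears a hypothetical "refurbish" command on a service channel.

    // "Product as a service" toy: the object degrades over time and can be refurbished.
    integer gCondition = 100;   // percent; decays as the object "ages"
    integer CHANNEL = 42;       // hypothetical service channel

    default
    {
        state_entry()
        {
            llSetTimerEvent(60.0);                // age once a minute for demo purposes
            llListen(CHANNEL, "", NULL_KEY, "");  // listen for service commands
        }

        timer()
        {
            gCondition -= 5;
            if (gCondition <= 0)
            {
                llSay(0, llGetObjectName() + " has reached end of life - please recycle my components");
                llSetTimerEvent(0.0);
            }
            else if (gCondition <= 20)
            {
                llSay(0, llGetObjectName() + " needs refurbishment (condition " + (string)gCondition + "%)");
            }
        }

        listen(integer chan, string name, key id, string msg)
        {
            if (msg == "refurbish")
            {
                gCondition = 100;
                llSetTimerEvent(60.0);
                llSay(0, llGetObjectName() + " refurbished by " + name);
            }
        }
    }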

In short, make things simple (enough) and make them for a reason.

So let's be positive

In conclusion, the OpenSim ecosystem is wonderfully rich and diverse yet for the most part coherent in a way that standards bodies will find hard to impose on Big Players. My belief is that its potential remains largely unexplored or at least underexploited. Encouragingly, its use is not predicated on significant (and recurrent) investment in new kit. It can, however, be used with phone-based Google Cardboard HMDs via the Lumiya viewer should that be a requirement [1]. It is, of course, open source and extensible.

Let's not pretend all is perfect with OpenSim and that it can replicate the totality of the fascinating work currently underway in HMD-mediated VR. On the other hand the marginal benefits of the HMD approach need to be balanced against cost at scale. This argues, perhaps, in favour of important but niche applications for HMDs rather than general adoption. The consequence may be loss of coherence as multiple diverse acquisitions are made across large institutions.

Meanwhile, back in the real world there are things OpenSim educators can do to help themselves. While the community sometimes bemoans the lack of core developers, what is more crucial as far as educators are concerned is a network to support the development and distribution of innovative educational applications. Kay McLennan's recent establishment of an Educator Commons is a valuable step in the right direction and hopefully more will follow.

My personal belief is that the key to the future of educational use of OpenSim lies in getting beyond the deficit mindset and making the most of the current potential. If we do that, the future will be more likely to look after itself.

Early session at the OpenSim Community Conference 2016


  1. There is also the CtrlAlt viewer for the Rift though its longevity is not assured.