tags: #publish
links: [[My Writing]], [[Software and Tech]], [[Law]]
created: 2021-10-23 Sat
---
# Moderation of Augmented Reality and Metaverse
So, online communities typically have a moderation and spam problem.
It is often amplified by anonymity, which enables bad behaviour without consequence, but it's also supported by raw scale: an enormous surface area to hide in, which enables the same thing to a lesser degree.
What happens when you increase the scale to either:
- a multitude of VR virtual meta-worlds (public to some degree like Facebook/Reddit/Twitter, but not directly overlaid on the real world)
- or one or more AR metaverses overlaid on the physical world and correlated with action in it?
Limiting bad behaviour in the first case, VR, looks like the moderation and anti-abuse work of Facebook, Twitter and YouTube, except harder, because behaviour is now 3D and sort of analogue. It's harder to moderate automatically than video, much harder than images, and far harder than text communication. It's likely near impossible to do manually at scale, if semi-public use of VR becomes widespread.
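A toy back-of-envelope sketch of that scaling claim, with entirely invented, illustrative figures (none of these numbers or names come from any real platform): raw signal per VR user is actually modest, but the *meaning* is relational - who you gestured at, how close you stood - so the surface to review scales with pairs of co-present users, and there's no discrete unit like a message, image or clip to queue for review.
```python
# Hypothetical, illustrative figures only - a sketch of why the moderation
# "surface area" of VR/AR blows up even though per-user bandwidth is modest.

MODALITIES = {
    # name: (approx. raw bytes per user-minute, co-present users, note)
    "text chat": (300, 1, "a few hundred characters; discrete, hashable"),
    "images":    (500_000, 1, "one ~500 KB photo per minute; discrete"),
    "video":     (60_000_000, 1, "~1 MB/s compressed; linear stream"),
    "vr/ar":     (700_000, 20,
                  "~90 Hz head+hand poses plus voice; analogue, relational"),
}

def review_surface(bytes_per_min: float, copresent: int) -> float:
    """Crude 'surface to police': raw signal multiplied by the number of
    pairwise interactions it can encode between co-present users."""
    pairs = max(copresent * (copresent - 1) // 2, 1)
    return bytes_per_min * pairs

for name, (bpm, users, note) in MODALITIES.items():
    print(f"{name:>9}: {review_surface(bpm, users):>13,.0f} "
          f"byte-pair-units/min  ({note})")
```
On these made-up assumptions, a room of 20 avatars already exceeds the review surface of a continuous video stream, despite sending a fraction of the data - the multiplier is the relational, analogue nature of the behaviour, not the bandwidth.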
Limiting bad behaviour in the second case, large public AR environments, has those same problems but also some additional ones: it becomes more like policing in the real world, because it involves people's physical behaviour in physical places.
Does this mean it actually requires our regular police forces to police AR? Probably. It will also get tangled up in real-world laws about who is liable for actions on particular premises. How does property- and location-based law apply to activity in AR at that location?
Will they? Yes, once it becomes sufficiently criminal, but probably only reactively, not quickly enough to extend real-world norms there. And it will face major problems: you can't police AR behaviour from the real world, only from within the AR layer, because some of the actions and meanings only exist in that layer and can't be interpreted from outside it. Policing from within the layer is itself difficult: it hits digital privacy and access-control problems that don't affect real-world policing, and it hits major scale problems if there are multiple parallel AR environments, or if AR allows more interaction to happen, faster. It all multiplies the surface area we need to police.
There are also qualitatively different locality problems. VR, and especially AR, have a geographical, proximity-based, spatial locality structure that is different from most other public internet spaces. Unless you can police it everywhere at once (massive surface area), there will be "good" and "bad" areas (crime hotspots, in effect) affecting the reputation of the whole endeavour. This is already visible in things like Second Life, where there are places where jerks tend to hang out to mess with newbies.
So, we are more likely to end up depending on commercial structures like Facebook "policing" and "moderating" it, but they'll be following commercial motivations, not civic-good motivations.
By default, acceptable behaviours will be set by commercial interests, and the degree to which restrictions are applied will be whatever suits those interests, rather than being equitable and legally enforced.
How will that evolve with legislation? It depends on the consequences and how they're perceived. But that too will be heavily skewed by the same commercial interests - even visibility of the consequences may be somewhat up to them...
Does decentralisation fix it, e.g. if the new VR/AR environment is a free-for-all not controlled by Facebook? Nope. That's a non-starter. A commercially moderated environment is the only way a large, public VR or AR space can get off the ground and become widely used at Facebook scale. It *requires* the limited, sanitised commercialness to be initially pleasant or useful enough for widespread adoption. Community policing will not be able to cope, not even to get started, because of the scale and locality problems. Zero policing will *definitely* not work - Parler's failure shows what happens when you set up an unrestricted public community with little moderation and an initial pitch of "freedom of behaviour". The best we could expect from decentralised VR/AR is limited-audience, restricted-membership spaces, at a small scale where self-selection or eviction is enough to constrain behaviour - not public Facebook-scale spaces.
There will be multiple parallel commercial environments initially, until one wins - there already seem to be significant difficulties in getting open standards for this stuff; everyone is pushing their own proprietary ones. This is an additional reason a decentralised approach won't be viable: it won't get standardised enough to work well enough to compete with slicker commercial versions, and fragmentation will simply let the best-funded eventually win both the user-acquisition war and the policing war.
See also: Shakespeare! The Tempest Act 2 Scene 1, and Gonzalo's "commonwealth" utopia.