Meta introduces ‘Personal Boundary’ feature to help combat harassment in VR
Meta has announced the addition of a “Personal Boundary” feature to its Horizon virtual reality experiences, with the goal of preventing harassment in virtual reality (VR). The new feature will be enabled by default in the Horizon Worlds creation platform and the Horizon Venues live event service. It generates an invisible virtual barrier around each avatar that stops others from approaching too closely, though you can still extend your arm to offer someone a fist bump or a high five. Each avatar is enclosed in a bubble with a radius of two virtual feet, keeping any two avatars at least four virtual feet apart.
According to Meta, if someone tries to get too close, the system will stop their movement. Previously, an avatar’s hands would simply vanish when they entered someone else’s personal area. High fives and fist bumps will still be possible between avatars, but unwelcome contact should be much more difficult. In a blog post, Meta stated, “We feel personal boundaries are a compelling illustration of how VR has the ability to let people engage comfortably.”
The boundary system expands on that hand-vanishing behavior, creating the equivalent of four virtual feet between avatars, according to Meta. Users cannot disable their own boundaries, Meta spokesperson Kristina Milian said, because the technology is designed to establish standard rules for how people interact in VR, though future updates may let users choose the radius size. If someone tries to walk or teleport into your personal area, they will be stopped in their tracks. You can, however, still travel past another avatar, Milian said, so users can’t do things like block entrances with their bubbles.
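Meta hasn’t published implementation details, but the behavior described above amounts to a simple geometric rule: each avatar carries a two-foot-radius bubble, so two avatar centers can never end up closer than four virtual feet. The Python sketch below is a minimal illustration of that rule under the assumption of a flat 2D floor plan; the names `clamp_move`, `BOUNDARY_RADIUS`, and `MIN_SEPARATION` are hypothetical and not Meta’s actual API.

```python
import math

# Figures from the article: each avatar has a two-virtual-foot bubble,
# so avatar centers stay at least four virtual feet apart.
BOUNDARY_RADIUS = 2.0
MIN_SEPARATION = 2 * BOUNDARY_RADIUS  # four virtual feet


def clamp_move(mover_xy, target_xy, other_xy):
    """Stop a requested move (walk or teleport) at another avatar's boundary.

    If the requested destination falls inside the bubble around `other_xy`,
    it is projected back out to the bubble's edge. A real system would check
    every nearby avatar and would still let hands reach across the boundary
    for fist bumps and high fives.
    """
    dx = target_xy[0] - other_xy[0]
    dy = target_xy[1] - other_xy[1]
    dist = math.hypot(dx, dy)
    if dist >= MIN_SEPARATION:
        return target_xy  # destination is outside the bubble: allow it
    if dist == 0.0:  # aiming at the exact center: back off toward the mover
        dx = mover_xy[0] - other_xy[0]
        dy = mover_xy[1] - other_xy[1]
        dist = math.hypot(dx, dy) or 1.0
    # Project the blocked destination onto the edge of the bubble.
    scale = MIN_SEPARATION / dist
    return (other_xy[0] + dx * scale, other_xy[1] + dy * scale)


# A walk straight at an avatar standing at the origin stops four feet short:
print(clamp_move((10.0, 0.0), (0.0, 0.0), (0.0, 0.0)))  # -> (4.0, 0.0)
```

Projecting a blocked destination onto the edge of the bubble, rather than rejecting the move outright, is one plausible way to reproduce the behavior Milian describes: the approach is halted, yet avatars can still slide around one another, so nobody can wall off a doorway with their bubble.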
Meta’s changes come two months after Horizon Worlds was released to everyone aged 18 and older in the US and Canada, following a protracted period of beta testing. During the beta, at least one user claimed that a stranger had groped her avatar. While she described using the block feature to halt the harasser, Meta found that she hadn’t used all of her options and said it wanted safety tools like the block button to be easier to find.
Harassment in virtual reality existed even before Horizon Worlds, the virtual meeting place and gaming hub from the company formerly known as Facebook, opened to the public in December. Users of earlier Facebook VR experiences have reported harassment including taunting, obscene gestures, and racial slurs, and in December 2021 several women reported being groped and sexually harassed by other VR users.
On December 1, Meta disclosed that a woman on Horizon Worlds had been assaulted by a stranger, an incident that came to light after she posted about it in Facebook’s Horizon Worlds beta-testing group. The company says safety remains a top concern in the metaverse, where people often encounter crowds of strangers daily. When asked about the incident by the media, Meta’s VP of Horizon, Vivek Sharma, said the woman should have used the ‘Safe Zone’ option when she felt threatened. Sharma called the episode “extremely sad,” but added that the case would help the company fine-tune its blocking feature to make it “trivially easy and findable.”
In a March 2021 memo to employees, Andrew Bosworth, the Meta executive who became chief technology officer in 2022, said that controlling what individuals say and do in the metaverse “at any significant scale is nearly impossible.” Milian said the company was working on the metaverse with politicians, experts, and industry partners, and Meta announced in a November blog post that it would invest $50 million in global research to develop its products responsibly. According to an internal memo, Meta has also urged its staff to volunteer to test the metaverse.
Misbehavior in virtual reality is notoriously difficult to police because incidents occur in real time and are rarely recorded. Titania Jordan, chief parent officer at Bark, a company that uses artificial intelligence to monitor children’s devices for safety risks, expressed particular concern about what children might encounter in the metaverse.
Abusers could target children by sending them chats in a game or speaking to them over headsets, she claimed, both of which are difficult to track. Ms. Jordan explained, “V.R. is a whole other universe of complexity.” “The capacity to identify a shady character and block them indefinitely or have repercussions so they can’t easily get back on is still being developed.”
The good news is that users won’t be able to turn off the Personal Boundary feature, which builds on Meta’s earlier anti-harassment measures, such as making an avatar’s hands vanish when they enter someone else’s personal space.