Apps like Bumble, Zoosk, and Badoo have used photo verification to confirm that users are real. What do you think is the future of verification for dating apps?
That's an important question and one that's tough to answer. Forms of verification can obviously deter scammers, but they also give people assurance that the person they are looking at has been verified in the same way they were: that he or she really does seem to be genuine.
I do not, however, run a service, so I cannot speak from personal experience about user reactions when verification steps are offered or applied. Some have said that previous attempts resulted in many genuine new users not signing up because of the complexity of certain forms of verification.
Many operators clearly now believe attitudes have changed and that technology is on their side in terms of almost frictionless verification. There is an obvious attraction in forms of photo self-verification that reduce the need to collect other data. It's an approach that is in tune with a user base that joins and accesses services through smartphones.
That feels very different from any duty on operators to verify identity and age. Few countries have national ID regimes or other simple alternatives, there is often no legal requirement, and a data-based process could be a costly deterrent to people using a service.
As a body based on a set of standards, good practice and guidance, the ODA certainly needs to keep looking at where and how the sector feels we should go further in identifying options for photo verification, or other forms of self-verification, that can be trusted without setting "requirements".
Do you think online dating is considered social media or something different? Is it possible that it could be both?
A great question and one we will address at a workshop in November. I suspect the answer is "both". If beauty is in the eye of the beholder, the definition of "social media" is in the gift of lawmakers at a national or Community level. Using slightly different language, it can be argued that dating sites, and others such as eBay, host content and facilitate peer-to-peer messaging for one single purpose. That is different from the likes of Facebook and Instagram, which distribute or "accelerate" content from third parties as well as from individual users sharing their lives and views with their "network" or with the social media community at large.
Facebook's talk of offering dating services will clearly edge policymakers towards a view that only limited distinctions between forms of online publishing, hosting and accelerating content are possible. It is for us to ensure they understand the distinctions and to explain why any approach to dating services has to be targeted and proportionate to the risks. Our services do not seem to create, publish and accelerate content for which providers could be made accountable.
What do you think is the greatest concern regarding online dating safety, and how do you think regulation may be able to help?
It may be a concept regulation is not best placed to address: the distinction between a profile and a person. I do not refer just, or particularly, to fake profiles and the risks of online fraud. I mean the distinction between how we see and present ourselves online and when messaging, and how we truly are as individuals. There is a famous Magritte painting of a pipe with the words "Ceci n'est pas une pipe". As I understand it, it is saying we should not confuse a thing or person with an image of that thing or person. A picture of a pipe is not a pipe! That might sound heavy or pretentious, but it comes back to online services as instruments that allow people to meet and get to know others. That is why a key ODA message we promote is "Get to know the person, not the profile".
Do you think an online dating service has an obligation to provide background checks on their members?
To say no smacks of indifference. To say yes is to suggest operators have some absolute duty, and some pretty absolute responsibilities as a result. This goes to the challenge we all face in balancing messages about what operators may do vis-à-vis profile and other checks and the monitoring of services with messages reminding users that they have a duty to themselves to remain alert to the possibility that someone is not what he or she seems.
In Germany, the UK and many other jurisdictions, third parties cannot go to a Ministry of Justice or the Disclosure and Barring Service to access information about previous criminal convictions. Exemptions exist, for example for those working in schools or with children in other environments, but they do not extend to the generality of employments or to web-trading, dating services or other activities. We also need to recognise that many of those who commit an offence are first-time offenders: there is no previous database or file flagging past behaviour.
We need, therefore, to look at what is practical in terms of prevention and deterrence at an operator level, at how that work is done and shared with users, and at clear reminders of the need for users themselves to exercise care when messaging and again when meeting someone in person.
One thing we have previously raised in the UK is the case for a ban on the use of dating or social media services as a condition of sentencing, or of parole and release from custody, where the offender in question had previously set out to use these services specifically to cause harm to others.
What are your thoughts on businesses and users creating bots or fake profiles? Should it be illegal for bots to pretend to be human users?
Our guidance is that operators should not themselves create fake profiles to make a service appear populated, or knowingly allow users or any other party to create and post fake profiles to attempt a fraud.
I have heard the argument that it might be acceptable for some services in the adult field to create profiles as part of an exercise in balancing the gender mix, or to spice up a service that is about fantasy, not dating. I am not persuaded. We need to think about the legitimate expectation of a person paying for what she or he believes to be a service designed to stimulate encounters. Surely it is not to chat to a bot. And I firmly believe national consumer protection agencies would take the same view.
Finally, for an activity to be illegal there would need to be a law outlawing that activity. That law would, presumably, wish to address any and all circumstances in which a bot might be created. That might happen in some omnibus legislation on digital safety, but it would require a great deal of thinking through. I would certainly be wary of unintended consequences: there may be many sectors that create online personas precisely in order to identify and act against wrongdoers.
Do you think that the data of private chats on dating services between users should belong to the users or the dating service?
I am no lawyer, but I assume the law is the law is the law on this. Users have rights under the GDPR and other legislation. That extends to access to the data held on them and rights to have data taken down and deleted, subject to a business's policies and its duties to law enforcers and others.
With chats we have to recognise that a chat is not between a user and the operator but between two users, facilitated by the operator. There would seem to be a limit to what can be done about messaging content that is also in the possession of the person you are chatting with.
What are the existing laws surrounding financial scammers on online dating services? How should these crimes be prosecuted? Should the dating service be involved?
This is another question that invites a lengthy reply and raises more questions than answers. I will resist that, at least in terms of length.
Are dating services "hosts" under existing E-Commerce law? If so, the E-Commerce Directive might prevent national governments from requiring that hosts "monitor the information which they transmit or store" or from imposing "a general obligation actively to seek facts or circumstances indicating illegal activity". The Directive also makes a clear distinction between illegal content (the worry with social media channels?) and illegal activity such as scamming.
Scamming is fraud. It is obtaining goods by deception. It is a criminal activity and prosecutable as such. That sits in parallel with, but is quite different from, consumer protection laws at the Community level that set out what is and is not unfair in terms of trading practices.
How should these crimes be prosecuted? I am inclined to say better! But that is unfair on national and regional law enforcement agencies. All kinds of internet scams exist. The UK National Fraud Intelligence Bureau (NFIB) lists about forty. A common feature is the ability of wrongdoers to run their scams from anywhere in the world. And, while there are some encouraging examples of successful international co-operation, there are many more stories of reports going uninvestigated for this reason.
We are, therefore, in a world of "best endeavours". In our case that might mean more real-time police sharing of reports of wrongdoing, allowing operators to cut off any suspect behaviour. It means operators themselves looking at the mix of deterrent activity possible: profile checking or verification by various means, reporting tools, and clear, timely and user-friendly advice and guidance on how to minimise exposure.
Somewhere in the mix sit financial institutions: "follow the money" or "block the payment" could make a huge difference, and banks and international money bureaux should also be looking at what can be done.
As a sector we have to take this problem seriously. We cannot refer to the real social significance and value of services that might now account for more than 30% of new relationships without recognising this comes with responsibilities. Our aim is to ensure we can continue to provide this valued and valuable service by ensuring decisions on how the sector is regarded and regulated are informed, proportionate and practical.