OPW - July 9 - Here's the video from the IDEA Webinar on June 24th. RealMe and Spectrum Labs executives explain three use-cases for how their new combined service, CheckPlus, improves your Internet dating service.
According to the U.S. Federal Trade Commission, romance scams were responsible for over $200 million in losses during 2019, up 19% from 2018. This unfortunate trend is showing no signs of slowing down, highlighting the critical need for increased vigilance from online dating service operators.
RealMe and Spectrum Labs have partnered to develop CheckPlus, a comprehensive fraud-prevention service to increase safety and security for users. Neil Davis, Chief Business Officer of RealMe, and Lee Davis, Vice President of Spectrum Labs, discuss the application of CheckPlus using three case studies in the latest IDEA webinar, hosted by Mark Brooks.
Mark Brooks: Welcome to another IDEA webinar. I'm Mark Brooks, the president of IDEA, the Internet Dating Excellence Association, and today we're covering RealMe and Spectrum Labs' new product. It's the IDEA webinar about safety and reputation. So, as we know, as people increasingly turn to dating platforms to make connections, the stakes are higher than ever for making those connections safe and positive. Dating platforms must continuously invest in safety in order to stay ahead of those with harmful intentions. We deal with multiple forms of abuse, and we rely on multiple lines of defense. Of course, having the best tech is key.
So, in this IDEA webinar, we have Neil Davis, who is the Chief Business Officer of RealMe, and Lee Davis, who's the VP at Spectrum Labs, here to share some interesting use case stories to illustrate how dating app operators can use CheckPlus. It is RealMe and Spectrum Labs' brand-new combined service, which will raise your safety game further and realize better operational efficiencies. So, Neil, Lee, thanks for taking the time with us. We're keen to see what you have in store for us. Neil, would you like to start us off?
Neil Davis: Thanks, Mark. By the way, thank you for hosting us. We are grateful for the opportunity to connect with you and our audience here, and to be partners with Spectrum Labs. Maybe we should just dig right into the first story, which we're calling "Bots Make Bad Dates." So, I'm going to call our protagonist here Samantha. Samantha joined Acme Dating after seeing that some of her friends had successfully found dates, and maybe even their life partners, through dating apps. So, she went through the process. She filled out her profile, found a couple of recent, flattering, and yet accurate photos, and put herself out there.
Quite quickly, she was excited when some guys started reaching out to her via Acme Dating, but it was a little bit weird. She noticed something. Sometimes the people she matched with said something that didn't make sense. It didn't flow in the conversation. It seemed like either a non-sequitur or it seemed like it was responding to something that she hadn't requested.
But she gave them the benefit of the doubt. Maybe they were nervous. Maybe they sent her something that wasn't meant for her. You know, we've certainly all done that before. So, she continued to message them, but it kept happening, and it seemed like too much. She certainly felt that something was off.
Samantha went and contacted Acme customer service, which assured her that everything was fine. I mean, they couldn't do anything else. And so, they said, well, we have an issue with bots, so if you don't have a lot of experience in online dating, you just must get a little bit savvier about understanding what you're reading and what you're not reading, and you'll do fine.
So, Samantha, who was excited and then really disenchanted, shared her frustration with some of her friends who had also been having the same problems on the dating apps they were on. She then canceled her membership because it was an unsatisfying experience. A couple of her friends then canceled their memberships because they also had unsatisfying experiences.
Now let's bring it over to Acme, and let me introduce Steve, who is now at Acme with a mandate to improve customer experience. Through the customer experience and the customer service logs, he learned of Samantha's experience, and then his rep shared with him that Acme was suffering from an increasing bot issue.
Like some of our audience maybe have, Steve knew from experience that bots could completely unravel the fabric of a dating community and destroy a brand's reputation. I mean, it's all about trust, safety, and security; trying to increase engagement, trying to decrease churn. And all the bot problem was doing was increasing everything that was supposed to be decreasing.
So basically, Steve said, okay, we need to look at partners that could amp up our trust and safety game. What Steve did is he reached out to RealMe and Spectrum Labs. So, what we do, and what we did in this case for Acme and Steve, is, at first, we scanned Acme's entire database of profiles, and all Acme had to give us was first name, last name, birth date and month, and zip code. We did the rest.
We started, and, in a matter of days, we scanned Acme's entire dating database and provided them a couple of different things. We matched all their users that we could match, the assumption being that the users we could not match (we have over 300 million profiles in the United States) had something wrong with the data input. Generally, as most of the audience knows, it has to do with age; it generally has to do with birth year rather than birth month. And so, what we did is we verified his user base, and we gave him a list of the people who couldn't be verified.
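The matching step Neil describes, checking each app user against a reference profile database on name, date of birth, and zip code, can be sketched roughly as follows. This is a hypothetical illustration only; the field names, normalization, and exact-match rule are assumptions, not RealMe's actual logic.

```python
# Hypothetical sketch of record matching against a reference profile set.
# All names and matching rules here are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Profile:
    first_name: str
    last_name: str
    birth_year: int
    birth_month: int
    zip_code: str


def match_key(p: Profile) -> tuple:
    # Normalize the fields used for matching (case-insensitive names).
    return (p.first_name.lower(), p.last_name.lower(),
            p.birth_year, p.birth_month, p.zip_code)


def verify_users(users: list, reference: set) -> tuple:
    """Split users into verified and unverified against the reference set."""
    verified, unverified = [], []
    for user in users:
        (verified if match_key(user) in reference else unverified).append(user)
    return verified, unverified
```

Note how a user with a wrong birth year, the common data-entry error Neil mentions, would land in the unverified list even though the rest of the record matches.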
Then, for all the ones that verified, we provided what we call a flag status. So, we allowed Acme to understand which of their users have issues in their past, whether it be severe criminal issues, sex offender issues, or, through our partners at Spectrum, patterns of toxic behavior. We also showed Steve the folks that have issues that are not severe.
Basically, RealMe is a comprehensive reputation platform for all U.S. adults. What we do in the dating space is help dating app providers and owners create more trust and an experience with a lot more safety, security, and integrity. And so, what we did for Acme is allow them to see which and how many of their folks were real, versus those that have some issues in their dating profile. We identified those with flags that could be concerning to other Acme dating members.
What they did then was mitigate potential harm. They could segregate those users with real issues off to a side and allow them to interact among themselves, but not with the general public. This prevents them from coming back under a different name. Alternatively, they could decide to terminate the really harmful users' accounts.
This unique partnership between RealMe and Spectrum Labs allows Acme to bridge the gap between in-app issues and off-app issues. And so, the relationship really provides a powerful tool to help these dating providers understand their members, and the threats to that love they're trying to create through their service offering. I'm going to throw it to Lee for a second, so she can explain what Spectrum's role in this solution is.
Lee Davis: So, the first thing we helped Steve with was deciding what behaviors he wanted more visibility into and, ultimately, power to respond to. And that's really what we do at Spectrum Labs. We help folks at dating apps understand what's going on in their communities. Our customers choose from a library of harmful behaviors that are trained from our extensive data vault. And then the models are fine-tuned to the community's expectations, because all communities are different, and they continue to be trained. And with that information, coupled with what RealMe is providing, our customers can make much more informed choices about how to manage the community.
So, back to Steve. Bots are on the forefront of his mind, and we also decided that it was a good time to think about how we can help him identify threats. So, we take those behaviors, like I mentioned, and we fine tune them to the uniqueness of Acme Dating's community. Many people have just different levels of acceptance or definitions of these behaviors, and we give them the power to customize those things.
And then, once you have those kinds of models fine-tuned and pumping in the background, we begin to use our automation platform, which is called Guardian. With Guardian, you can take those signals that we're producing, saying yes, this is harassment; yes, this is a threat; yes, this is a bot. And what Steve did was end up building actions off of them. We call those automations.
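The signal-to-action idea Lee describes can be pictured as a small routing function: a detected-behavior signal comes in, and a rule decides whether to fully automate, escalate to a human, or just log it. This is a toy illustration, not Guardian's actual API; the behavior labels, thresholds, and action names are all made up.

```python
# Illustrative sketch of routing behavior signals to automated actions.
# Behavior names, confidence thresholds, and actions are hypothetical.

def route_signal(signal: dict) -> str:
    """Return an action for a detected-behavior signal.

    High-confidence bot detections are fully automated, while more
    sensitive cases (e.g., possible harassment) go to human review,
    echoing the point that you rarely automate everything.
    """
    behavior = signal["behavior"]
    confidence = signal["confidence"]
    if behavior == "bot" and confidence >= 0.95:
        return "suspend_account"        # safe to fully automate
    if behavior in ("harassment", "threat") and confidence >= 0.7:
        return "escalate_to_human"      # sensitive: route to customer service
    return "log_only"                   # low confidence: just record it
```

The split between "suspend_account" and "escalate_to_human" mirrors the next point in the discussion: full automation only where confidence is high, human review for the sensitive cases.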
In some cases, he would fully automate a response. And that's really the key to scaling moderation and scaling trust and safety efforts. If you have ultimate confidence in those signals that we're delivering to you, both RealMe and Spectrum, you can confidently automate on those things, but you'll most likely never automate the whole kit and caboodle.
So, certainly, Steve configured Guardian to deliver more sensitive cases to the customer service team for review. So, once all this is up and running, RealMe and Spectrum are pumping in the background. Everything's going well, and what our customers are finding is that they're able to reduce the load on customer service issues.
And when you think about every case costing $6, it really adds up when you have a global platform and you're onboarding hundreds of thousands of members to your dating experience. That wraps up this one. I'd like to toss it back to you, Mark.
Mark Brooks: I want to thank you so much, Neil and Lee. I do like this format where you're going through a use case story. It really drives it home. So, shall we move on to the next story, Lee? Do you want to take it?
Lee Davis: Let's do it. I think it's funny, and hopefully everyone else will too. This one we titled "Are You Smarter Than a Fifth Grader?" So, many years ago, in the U.S., there was a very popular television game show called "Are You Smarter Than a Fifth Grader?" Maybe most of this audience missed this glorious entrance into television history. The premise of the game show is this: adult contestants were asked questions lifted directly from elementary school quizzes. If the contestant got an answer wrong, or just decided to kind of bail out of the competition for whatever reason, the contestant had to say, man, I'm not smarter than a fifth grader.
I found this interesting; only two people have ever actually won this competition and they were both teachers. So, the point really that I'm trying to make here is that no one's smarter than a fifth grader because fifth graders are living and breathing that subject matter every day.
And so, with that logic in mind, let's talk about romance scams. You're not smarter than a romance scammer. And why? It's because, like those fifth graders, they're living and breathing this stuff every day. And they're very good at it. Let's revisit some of their known techniques. They'll design a fictional profile and select a better-than-average-looking photo. Not movie-star good looks, but still slightly better than average. I think movie-star good looks might raise some flags.
Here's what I found interesting. They'll say that they're a nurse or a doctor, or they'll have a profession that's more service-oriented, because what they're trying to do is piggyback on your inherent trust in those types of positions, right? They're calculating. And then they'll begin to build rapport with you: complimenting, flirting, and supporting. They'll display all of the behaviors that you want to see in a potential match.
And then they'll nonchalantly ask you for some money, small amounts. You'll do it because you think you're helping the person that you're developing feelings for. And that's the sad part, right? Because they're not only doing it with you, they're doing it with a hundred other people on the platform at the same time.
It's a scary thought when you think about it in that way. In short order, they can collect hundreds of thousands of dollars from your membership. Right? So, last year, the FTC reported romance scam losses of over $200 million. That's an amazing amount; it's greater than any other scam reported to the FTC. And that's 19% more than what was found in 2018. So, it's a real issue, right?
And it certainly is with romance scammers. Again, the point here is that they live and breathe this. It's their full-time job, and they're at it to make money off your members. So, dating apps turn to us, to Spectrum Labs and to RealMe, to protect their membership against scammers. Neil, I'm tossing it over to you to talk about how you guys manage that.
Neil Davis: Thanks. You know, my wife tells me I have a tremendous amount of hubris. But even I don't have enough hubris to think that I can outsmart these romance scammers. Because this is what they do; this is all they do. And the problem is getting worse. There was an article this morning in a UK newspaper that said that in the month of April there were 10,000 reported cases of sexploitation issues online, and God knows how many cases go unreported to a central reporting agency. This has become a big issue during the Coronavirus pandemic. So, what we've done in partnership with Spectrum Labs is create the CheckPlus product, and it identifies users with criminal backgrounds, right?
So, we've got profiles for virtually every adult in the United States, which shows things like criminal and arrest records, financial information, property information, lawsuits, liens, bankruptcies, and employment. We're even bringing in gig economy worker scores. So, for the hundreds of millions of gig economy workers, that can factor into that overall profile.
We can identify off-app patterns of toxic behavior and bad actors. We identify the people who misrepresent themselves with their information, and it could be as simple as, you know, I'm five years older than I'm showing. Or it could be as simple as, I really don't live in Beverly Hills; I really live 150 miles outside of Beverly Hills. We surface all of that to our dating partners. And we basically enable them, and, more importantly, their members, to make a more informed decision before they send money to someone, before they give personal information to someone, before they express any vulnerabilities to someone.
So, this exclusive relationship helps basically identify user IDs with concerning behavior and gives our dating partners a chance to get ahead of that. We want to protect those people who are the most vulnerable and actively looking and genuinely looking to find love. Lee, back to you for Spectrum’s solution in this issue.
Lee Davis: Scammers represent a very, very tricky problem because their behavior is supposed to look great. Right? That's not the issue. They do all the right things in this unique environment of dating apps. They say all the right things; they make you feel good. That's their job. Right? The real key that we've found to identifying scammers, say they get past the RealMe check and they're in your environment, is to examine context, and do it at scale. So, our technology evaluates context by looking at multiple data points: the actual interaction that's happening, the context that surrounds the two people interacting, what they have been doing outside of that interaction on the platform, and their scoring. How have they been as members of the community? Do they have a good user reputation score or a not-so-great user reputation score?
So, anyway, the key to getting an accurate idea of whether a person is a scammer or not is the ability to evaluate context. And when you can accurately identify the behavior, then you can automate, and then we get into our sweet spot of automation. Automation is key with scammers. Now, why in particular is it key with scammers? Well, it's because they operate at scale. It's not just one guy in some basement trying to rip you off for a hundred dollars. It's a room full of people. It's not one person who fleeced America of $200 million; it's dozens of them everywhere, operating day and night.
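The "evaluate context at scale" idea can be sketched as combining several of the signals Lee lists, how widely a sender fans out messages, how often they ask for money, and their reputation score, into one risk number. The signals, weights, and thresholds below are invented for illustration; they are not Spectrum Labs' actual model, which is described in the transcript as a trained AI system rather than a hand-weighted formula.

```python
# Toy context-scoring sketch: combine behavioral signals about a sender
# into a single scam-risk score. Weights and caps are assumptions.

def scam_risk(distinct_recipients: int,
              money_requests: int,
              reputation: float) -> float:
    """Return a 0-1 risk score; higher means more scammer-like.

    distinct_recipients: people this sender has messaged recently
    money_requests: times the sender asked for money
    reputation: the sender's community reputation score in [0, 1]
    """
    fan_out = min(distinct_recipients / 100, 1.0)  # scammers message many people at once
    asks = min(money_requests / 3, 1.0)            # repeated small money requests
    rep = 1.0 - max(0.0, min(reputation, 1.0))     # low reputation raises risk
    return round(0.4 * fan_out + 0.4 * asks + 0.2 * rep, 3)
```

The point of even a toy version like this is that no single message looks wrong; it's the combination, one person courting a hundred members while asking for money, that stands out.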
So, scale and fully automated responses are what people need in order to identify these folks and then take responsible action. Our customers who use us to help with this problem take a suite or a combination of behaviors and, like I mentioned in our last story, we fine-tune them to match what is appropriate for their communities. And then, once we have that going, people use Guardian, that moderation software I was mentioning, to build those automations so that their teams can scale. And that unlocks a whole bunch of happiness in the form of efficiencies, cost savings, and, well, love. And so that wraps it up. I'm going to go back to you, Mark.
Mark Brooks: Great. Thank you. So far, so good. We've covered two stories, two use cases: bots and scammers. We've got a third story for you, as promised. So, Neil, if you'd like to take us into the third story.
Neil Davis: So, this one really is all about safety, about people feeling safe in online dating environments, and we're going to call it "Flirting or Harassment? You Decide." Something that some people are becoming significantly more aware of now, certainly more than ever before, is that flirting can look a lot like harassment, and vice versa. So, if you think about it, flirting is a combination of flattery, teasing, and lightly veiled threats (you know, "I'll show you what I can do"), plus innuendo and double entendre, and it usually contains charged words and some level of sexual content.
The other thing is, it's about being persistent. So, it's complex; sometimes it's nuanced, sometimes it's in your face. But that's kind of what a flirting environment is. Now think about harassment. Harassment is also persistent. It can be sexual. It can be a little bit teasing. It certainly can have some level of threats. But there is a pretty dramatic difference in terms of how safe, secure, and trusting I feel as someone who is being flirted with consensually versus someone who is being harassed.
So, let's go back to Acme Dating. Fred, who works at Acme Dating, became concerned about the whole flirting-versus-harassment issue after a female friend shared an experience on a dating app. She felt harassed by someone she was matched with, but she found it difficult to explain the situation to customer service and get the right support, because sometimes it's really a gut feeling rather than something you can point to. So, Fred basically tried to help the process. He turned to RealMe and Spectrum Labs to run an experiment to see if something similar was happening on his platform as well. Lee, over to you.
Lee Davis: Okay, thank you. So, harassment is a really tough behavior to identify, especially on a dating app where, like Neil says, the environment is just charged. It's different. Lots of feelings are involved. Tellingly, it's also one of the first behaviors our customers choose to address once they begin to work with us.
Identifying behavior, and I'm going to sound like a bit of a broken record here, is really all about being able to examine context. It's in context where you find out that something's wrong. How is the recipient responding? When are these exchanges taking place? How often? What's the history of the people involved? This is all information that basic keyword lists, if you're using those to moderate, just can't touch.
So, our dating app, Acme Dating, began, in this scenario, to use our harassment model. This is a story about how AI works over time. Ours is an AI solution that learns as it goes. In this case, we're trying to see whether the harassment model would recognize the instance that Fred's friend described. What we do with that is called retraining. It's a very normal process that anyone running an AI technology would be familiar with. We extract the relevant data and hand-label it based on this new information. This story, shared with Fred by his friend, becomes kind of our new North Star for saying that this is harassment, right?
And so, we'll use that to relabel the data, right? Then we put that data back into the harassment model and start it again. Then it's pumping along on its merry way. It's going, it's doing its thing. Everything's fine. But what we want to do in this instance is change the model's behavior, or the model's understanding of what sexual harassment is. Fred also wants to fine-tune what happens with the automations that are linked to this behavior. He probably has things like: well, if a new user is found to be harassing multiple people within this defined time frame, these things happen. And they're a combination of automated things, like an email warning, and human-touch moments, like customer service checking in and figuring out what's going on.
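The kind of rule Fred might configure, "if a new user harasses multiple people within a time frame, trigger these responses", can be sketched as a simple sliding-window check. The window length, threshold, and action names below are hypothetical, not an actual Guardian configuration.

```python
# Minimal sketch of a time-windowed harassment automation rule.
# WINDOW, TARGET_THRESHOLD, and the action names are assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
TARGET_THRESHOLD = 2  # distinct people harassed before escalation


def actions_for(flags: list, now: datetime) -> list:
    """Decide responses for one user's harassment flags.

    flags: list of (timestamp, target_user_id) harassment detections
    for a single sender. If the sender has harassed multiple distinct
    people inside the window, return a mix of automated and
    human-touch actions; otherwise, do nothing yet.
    """
    recent_targets = {target for ts, target in flags if now - ts <= WINDOW}
    if len(recent_targets) >= TARGET_THRESHOLD:
        return ["send_warning_email", "queue_customer_service_checkin"]
    return []
```

Note the rule counts distinct targets, not raw flags: repeated flags in one heated conversation look very different from a user working through the membership one match at a time.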
When you retrain, that's also an opportunity to redirect or re-tune some of your automations. The best practice is that we send a little bit more back to customer service to make sure that the model is picking up a good amount, or the appropriate amount, and the appropriate problems, let's say. And then, once that's taken care of, the old automations get rolled back into the fold and we're off and running. Then we can achieve those new levels of efficiency that we want and drive down the cost of customer service. Okay. So that's it in a nutshell. Back to you, Neil.
Neil Davis: I understand that Acme Dating has a perspective, an aperture, that looks at what happens and what goes on on their site, that they know of. But they have no idea what's going on with some of their users outside of the Acme environment. So, the relationship, and the partnership, between Spectrum Labs and RealMe is an opportunity to provide Acme and all our dating partners with a holistic view of what happens in-app and off-app, whether it's toxic behavior or a questionable background.
All that can include harassment as well. Right? RealMe has over 325 million profiles in the United States. What we then do is provide that to Acme and show them who the real users on their site are by matching their profiles to ours. And so, a real user has a better opportunity to be engaged with on the site. It will enhance engagement. It will decrease churn, because people will feel that their experience has a better outcome. Right? And so, this CheckPlus product that we have with Spectrum Labs is really the first product on the market that comprehensively allows a dating app to build more effective models to predict risk and analyze toxic behavior.
So, in-app and off-app data each provide great personal insights, but combining them provides not only a reputation check but an entire holistic look at a user, a profile, and a complete data set. It will increase engagement, increase attachment, decrease churn, and increase trust, security, and integrity. That's what we do at scale, Mark.
Mark Brooks: Great. Sounds like a marriage made in heaven. That's a good combination. We needed that more holistic view to get the job done. So, in closing, the issue of trust and safety is never really solved. It's a moving target for us all and a constant battle. Right? Using the best tools in combination can make it less of a headache and help protect more of our members. So, I really encourage you to take a look at this new product, and I'll be following up with you to connect you, and give you the opportunity to dig a bit deeper and essentially demo the product.