Q: Tell us more about what you do.
A: I make sure that our clients' filters are working as well as they should. I assess decisions that our automated moderation tool, Implio, has made: I take a sample of moderated content – items that have been filter-rejected and filter-approved – and identify whether any mistakes were made. I then learn from those mistakes and make the appropriate adjustments to the filter.
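The audit step described above can be sketched in a few lines. This is a minimal, hypothetical illustration – not Implio's actual tooling – that compares a sample of filter decisions against human verdicts and tallies the two kinds of mistake:

```python
# Hypothetical audit sketch: each sample item is a pair of
# (filter_decision, human_decision), where a decision is either
# "approved" or "rejected". The human verdict is treated as ground truth.

def audit(sample):
    """Count filter mistakes in a sample of moderated content."""
    false_rejects = sum(
        1 for f, h in sample if f == "rejected" and h == "approved")
    false_approves = sum(
        1 for f, h in sample if f == "approved" and h == "rejected")
    return {
        "false_rejects": false_rejects,     # genuine ads the filter blocked
        "false_approves": false_approves,   # prohibited ads that slipped through
        "error_rate": (false_rejects + false_approves) / len(sample),
    }
```

A rising `false_rejects` count would suggest the filter's criteria are too broad and need loosening; a rising `false_approves` count would suggest the opposite.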
Q: What kind of things are you looking for?
A: Typically, we're looking for false positives: items that match a filter's criteria as written, but aren't actually prohibited. Take the firearms brand Beretta. Weapons are prohibited for sale online in some countries but not in others, so for many sites a filter rejecting firearms makes sense. However, there's another Italian brand, also called Beretta, that manufactures water heaters. So a lot of research is needed to ensure that an ad for Beretta water heater parts isn't mistakenly rejected from an online marketplace.
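One common way to reduce this kind of false positive is to make the keyword rule context-aware: the ambiguous brand name alone isn't enough to reject an ad. The sketch below is a hypothetical illustration of that idea – the context word lists are invented for the example, not taken from any real filter:

```python
# Hypothetical context-aware rule for the ambiguous term "beretta".
# The word only triggers a rejection when firearm-related context
# appears alongside it; water-heater context suppresses the match.

FIREARM_CONTEXT = {"pistol", "handgun", "rifle", "9mm", "ammunition"}
HEATER_CONTEXT = {"water", "heater", "boiler", "spare", "parts"}

def should_reject(ad_text: str) -> bool:
    """Reject only when 'beretta' co-occurs with firearm context."""
    words = set(ad_text.lower().split())
    if "beretta" not in words:
        return False
    if words & HEATER_CONTEXT and not words & FIREARM_CONTEXT:
        return False  # likely a water-heater ad: let it through
    return bool(words & FIREARM_CONTEXT)
```

In practice a real rule would handle multi-word phrases and word boundaries more carefully, but the principle is the same: the surrounding words decide what "Beretta" means.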
Q: What's the overall effect of a 'bad' filter, then?
A: It depends. If the filter is set up to auto-reject matched words and phrases, genuine ads can get rejected, which makes for a bad user experience. Conversely, a filter that misses prohibited content lets it through to users, which erodes trust in the site.
Q: Which rules are hardest to program into a filter?
A: Scam filters are the most complex to implement, mostly because scams evolve and scammers are always trying to mimic genuine user behavior. To counter this, we monitor email addresses, price discrepancies, specific keywords, IP addresses, and payment methods, among other things.
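Because no single one of those signals is conclusive on its own, a common approach is to combine them into a score and route high-scoring ads to manual review. The sketch below is a hypothetical illustration of that pattern – the field names, weights, keyword list, and threshold are all invented for the example and are not Implio's actual rules:

```python
# Hypothetical multi-signal scam scoring. Every name and weight here
# is illustrative; a real system would tune these against labeled data.

from dataclasses import dataclass

@dataclass
class Ad:
    email: str
    price: float
    market_price: float   # typical price for comparable items
    text: str
    ip_country: str
    listing_country: str
    payment_method: str

SCAM_KEYWORDS = {"western union", "shipping agent", "urgent"}

def scam_score(ad: Ad) -> float:
    """Sum weighted risk signals; higher means more suspicious."""
    score = 0.0
    if ad.email.split("@")[-1] in {"example-freemail.test"}:
        score += 1.0   # known disposable email domain (invented)
    if ad.market_price and ad.price < 0.5 * ad.market_price:
        score += 2.0   # price far below market: too good to be true
    if any(k in ad.text.lower() for k in SCAM_KEYWORDS):
        score += 1.5   # phrases common in known scams
    if ad.ip_country != ad.listing_country:
        score += 1.0   # poster's IP doesn't match the listing's country
    if ad.payment_method in {"wire_transfer"}:
        score += 1.0   # hard-to-reverse payment method
    return score

def needs_review(ad: Ad, threshold: float = 2.0) -> bool:
    """Flag the ad for manual review instead of auto-rejecting it."""
    return scam_score(ad) >= threshold
```

Sending borderline ads to a human reviewer, rather than auto-rejecting them, matters precisely because scammers mimic genuine behavior: a hard rule on any single signal would reject real users too.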
Q: What does it take to be a good filter manager?
A: You need to have a curious, analytical, and creative mind.
(FULL DISCLOSURE: Besedo is a client of Courtland Brooks.)