You've gotta love the PR bullshit of it all. Someone intentionally chose to write the headline as:
> TikTok: An update on recent content and account questions
And not something more honest like:
> TikTok: An apology about a content moderation mishap
This is done to imply, at first glance, that they did nothing wrong, and only concede later in the article that they did something wrong but will do better.
It is also convenient that they first draw attention to the author of the video being associated with:
> (1) Terrorism or terrorist imagery, (2) Child exploitation, (3) Spam or similar malicious content.
And then immediately after concede that the video was removed because they messed up:
> Due to a human moderation error, the viral video from November 23 was removed. [...] it should not have been removed.
If companies want to know why they can't regain trust and aren't given the benefit of the doubt, it's because they cannot be honest in situations like this. PR in the 21st century is about communicating how you have in fact messed up because you are humans, not trying to gloss over it and distract from it like some corporate overlord.
I don’t think this certainty about this being “PR bullshit” is warranted at all, and it feels a bit like knee-jerk cynicism.
The actions as described in the press release seem to me both plausible and reasonable. Banning multiaccounts for severe infractions makes sense, and it also makes sense that this account could have been caught up in that.
As for the video takedown... you may overestimate the consistency of human moderators. They are often under intense time pressure, and mistakes are made all the time. There is also the potential for individual moderators to apply their own bias to a report, outside of company policy. With a 50-minute reversal of the takedown, that seems plausible.
If they are not outright lying, it does seem like the title is accurate and not at all misleading. It does sound like a moderation mishap.
> I don’t think this certainty about this being “PR bullshit” is warranted at all, and feels a bit like knee-jerk cynicism.
It's always warranted. It's Public Relations, not Public Information, and it's created exactly to convey the best public image that they can muster given the circumstances. Official statements like it should always be regarded with suspicion, because there is little incentive to explain the situation fairly when not doing so is likely to convey a better public image.
It's funny, because if we called it "propaganda" (which PR essentially is) and all agreed that it is, no one would accuse you of being cynical for not trusting it at face value.
That's probably true, they want to convey the best image they can. I feel like people are trying to insinuate something more sinister is going on, like they are lying.
> As for the video takedown... you may overestimate the consistency of human moderators. They are often under intense time pressure, and mistakes are made all the time. There is also the potential for individual moderators to apply their own bias to a report, outside of company policy. With a 50-minute reversal of the takedown, that seems plausible.
And how does that excuse them? If a company cannot handle quality moderation with its current workforce, it should invest in more qualified people, more people, or better tooling.
Users, and customers more generally, should not care about that: to them the company/service is a black box that provides the desired output for their input (content, money, personal information, etc.).
Whatever goes on inside the box is of no relevance.
I can understand empathizing with employees, in fact everyone should, but empathizing with a company? Hell no.
You are never going to get perfect performance and correctness from any kind of moderation. You will always underfit and overfit the set of allowed content.
These kinds of complaints miss the point. YouTube allowed predatory copyright claims without any burden of proof on the claimant (only recently did they punish a copyright troll). If this was indeed a human moderation mistake AND the platform is not plagued with them, then I don't understand what you are expecting of them; enough moderators to have a 0% error rate?
The video removed was not close to any of the prohibited categories, so it's misdirection to say you can't get to a 0% error rate. The claimed "error" is not an edge case that well-intended moderators might make an honest mistake on. Falling back on statistics just obscures that.
As a counterclaim, I would say that your comment is misdirection about how moderation works on a big platform.
I have no insider knowledge, but for example last week a big YouTube channel had many of its followers banned because they were spamming (solicited) emojis in chat.
Somewhere deep in the moderation stack that tripped an edge case of spamming/abuse/who-knows-what (not YouTube's doing, apparently; they had no idea how this happened).
I expect them to work under the assumption of "innocent until proven guilty," not vice versa (which is easier: just do a huge ban wave and restore the few people who complain, as the others won't care to fight it).
If you allow me a strawman, the only possible outcome of this is to expect perfection from everyone, thus building a society based on hiding any and all of your mistakes forever.
A satirical Twitter account impersonating Satan once tweeted: "One sin or a thousand sins and you end up in hell anyway. Why not go for a billion and go down like a hero?"
I am not saying your opinion is wrong, but it absolutely needs a concept of redemption for balance.
Sure, I didn't mean that the company must be disbanded as soon as they make their first moderation error.
But perhaps they should pay a small fine for each account erroneously censored, so that they have a clear incentive to get better. (Who would levy that fine, in the context of multinational corporations, is a separate debate.)
I am more interested in who would design such fines. One of the great delusions we live under is that complex social problems have simple solutions.
As a practical example, public train service in Germany was privatized some decades ago; to guarantee quality, the contract stipulates some quotas that need to be met, one of which is that delays must stay below a certain level. Here comes the catch: if a train is cancelled, it does not count as a delay.
So the financial incentive is essentially to keep delays small (but nonzero) and to cancel any train with an above-average delay.
You cannot game the system and expect the system not to game you back.
Or better: are you prepared for the consequences of a policy like the one you propose in practice?
> If a company doesn't achieve it or is not even making incremental progress towards it, then it should come under fire.
TikTok, AFAICR, doesn't have the poor track record that, say, various Google products do. I'm not aware of a pattern of behavior on their part, just suspicion.
Tbh, compare that to YouTube's response to takedowns and its (rather poor) human moderation, which are famously terrible. I'm not sure about TikTok, but a human undoing a bad action in 50 minutes should be lauded. It's only because it's a politically charged subject that people will see bias no matter which way they look at it.
> Banning multiaccounts for severe infractions makes sense, and it also makes sense that this account could have been caught up in that.
I am on the fence on this one.
The first ban was over satire. TikTok acknowledges not being open to satire, which is refreshingly straightforward in a way, yet still gives me pause for a social media platform. Especially when it's not just the content that's taken down but the whole account that's banned.
The “multi-account” part then comes from the girl using the same phone to create a second account after the first one was banned. It’s not an elaborate scheme to game the system and auto-praise her videos, or an account farm, or that kind of behavior. She would have had a single account if the first one were still alive.
All in all, they make it seem like she did a lot of very unruly things when, on most other platforms, it would have been totally OK behavior.
In particular, spending that much effort to make her look controversial over satire and humanist speech is unwarranted.
If I recall, the video was not taken down. Her account was still active so people could see the video, but her access to the account was blocked for a month.
It’s ironic because TikTok was getting a lot of flak in the news recently because they were too slow to remove actual terrorist propaganda. So they probably went overboard and it resulted in this situation.
Want to add to this. Tiktok blatantly violates Google's device ID policy, and actually admits it in writing.
> We share your device ID with measurement companies so that we can link your activity on the Platform with your activity on other websites;
When I asked my contact at Google a year ago whether they knew about Alibaba's device ID abuse for advertising targeting, they said it's their policy not to touch Chinese companies' device ID abuse.
The press release specifically states that the video was taken down: "November 27, 2019 @ 7:06am ET – Due to a human moderation error, the viral video from November 23 was removed."
News outlets should run a special feature after each holiday covering the headlines dumped right before said holiday in an attempt to escape notice. Not only would those stories get coverage, but the deceptiveness of the timing would be highlighted. I can see this being a good fit for news/entertainment like Stephen Colbert.
It's not just holidays, it's every Friday. It's called "taking out the trash." And since most broadcast newsrooms are on skeleton crews after 5pm on Friday, the bad news usually doesn't get reported until Sunday, if at all.
You're forgetting that they posted the news the same day the issue happened and was fixed. Should they have delayed the fix until after the weekend to avoid the bad optics?
No. You missed the point. TikTok is a Chinese company. They still need to do business in China. They can't give an impression that the Party was wrong.
Because this sort of thing should inform the expectations a customer holds of the business: what they can entrust the business with and what is better taken elsewhere, and how to expect it to act now and in the future. Businesses are not actually black boxes, in the same way that no actor in any realistic marketplace ever has perfect knowledge. Useful abstractions, but only up to a point. Got any inside-ish info? Use it to your advantage if you can.
If you know they are subject to that kind of pressure, would you launch your "We stand with the Uighurs" campaign there? Maybe yes, maybe no, but you should consider that carefully. Or maybe you wouldn't want them to gather data on you at all, if you are, say, involved with the Hong Kong resistance or whatever. And maybe, knowing this, you would want to take your data and attention elsewhere just on principle.
Given what we know today, would you trust Facebook to not do any secret shady business with your data? Wouldn't you make that part of a decision where one outcome might give them a lot more data on you?
That's why such info is very much of relevance to customers. That's why they try so hard to control the spin on things like that, because it's harmful if their background gets too much attention.
Oh, I have nothing against prior research, but when an issue just happens, why would you care about the reason as a customer?
I do not trust any corporation... well, I do trust them to do everything they can to profit the most. That includes Facebook, though I find no purpose in using social media.
I hear you, but we try hard to use representative language from the article [1] rather than making something new up. For example, to say "concedes" is already crossing into editorializing. On the other hand, bland corporate press release titles on mea culpa posts are misleading [2], which means that the site guidelines call for changing them: "Please use the original title, unless it is misleading or linkbait; don't editorialize." [3]
The solution is to search for the point where the article actually says what it's about—it has to eventually, or there's no culpa in the mea culpa. I looked through the article and think I found it, so have put that in the title above.
I should have thought of this earlier, since it's one of our standard tactics.
There is a better way to do it: you leave the incorrect reason out and focus on the mistake, and perhaps consider stating what you are doing to avoid making this mistake or others like it in the future.
> If companies want to know why they can't regain trust and aren't given the benefit of the doubt, it's because they cannot be honest in situations like this. PR in the 21st century is about communicating how you have in fact messed up because you are humans, not trying to gloss over it and distract from it like some corporate overlord.
Not sure what either of them is for sure, but at first look it seems like both are monitoring scripts (see https://sf16-muse-va.ibytedtos.com/obj/ttfe-maliva/slardar/p...), though I will do a deep dive soon. Interesting that they need this just for a simple press release.
Also notice the person posting is explicitly from "TikTok US" and not just "TikTok". Interesting decision.
Also, if you root your Android phone, Wireshark their app, trace the traffic, and check the manifest, you'll see they are sending users' personal identifiers (non-resettable hardware IDs) to servers in China. There's no way uploading this data (or reading it in the first place) is in any way necessary to provide the app's functionality.
And they dump/read files in shared storage spaces to enable cross-tracking with other Chinese apps. If you try to install it into Secure Folder on Samsung, it gets disabled.
The best fit is probably the "Anti-fraud: Enforcing free content limits and detecting Sybil attacks" use case, since that is about preventing a single user from creating multiple accounts. (Although the content limit for a banned account is to stop it from producing more content, not consuming it.)
The official recommendation:
> Use: Instance ID or GUID. On Android 8.0 (API level 26) and higher, SSAID is also an option, as it's scoped to the app-signing key.
Why this recommendation?
> Using a GUID or Instance ID forces the user to reinstall the app in order to circumvent the content limits, which is a sufficient burden to deter most people. If this isn't sufficient protection, Android provides a DRM API, which can be used to limit access to content and includes a per-APK identifier, the Widevine ID.
So I guess best practice in this case means they should use DRM instead.
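To make the quoted guidance concrete, here is a minimal sketch of the GUID approach in plain Java, with a file standing in for Android's SharedPreferences or private storage (the `AppId` class and `app_id.txt` file name are hypothetical, not from any real app):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class AppId {
    // Hypothetical store; on a real Android device this would live in
    // SharedPreferences or the app's private files directory, which is
    // wiped on uninstall.
    static final Path ID_FILE = Path.of("app_id.txt");

    // Returns a stable, app-scoped identifier. Because it lives in
    // app-private storage, reinstalling the app resets it; that is
    // exactly the property the guidance relies on to deter ban evasion
    // without touching non-resettable hardware IDs.
    public static String getOrCreateAppId() throws IOException {
        if (Files.exists(ID_FILE)) {
            return Files.readString(ID_FILE).trim();
        }
        String id = UUID.randomUUID().toString();
        Files.writeString(ID_FILE, id);
        return id;
    }

    public static void main(String[] args) throws IOException {
        String first = getOrCreateAppId();
        String second = getOrCreateAppId();
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

The point of the sketch is that a per-install GUID is enough to enforce per-account limits, which is why reading hardware identifiers goes beyond what the stated use case needs.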
The article doesn't seem to say it's coincidental. The person previously made a video containing a picture of Osama bin Laden. There's probably a positive correlation between people who make videos containing pictures of Osama bin Laden and people who make videos critical of Chinese brutality. So if you ban everyone in the first group, you're likely to ban some people in the second group also.
I didn't say that at all. I never said it's ok to ban people who show a picture of Osama bin Laden, nor that evening news shows should be prevented from showing pictures of Osama bin Laden.
> They would have left it banned if they wanted to do what you're saying.
Nah, they were caught with their hand in the cookie jar.
For critical content outside China, they minimize it via censorship when possible. Once the public becomes aware of censorship, they stop censoring.
So in this case, there are probably still things being removed from TikTok that are critical of China, we just aren't aware of them because they aren't from this girl's account. As for this girl's account, they can probably artificially lower likes or something. The point of censorship is to do it without being caught.
> Once the public becomes aware of censorship, they stop censoring.
Not in my experience. The usual cycle on Weibo is that some sensitive news breaks, e.g. by eyewitnesses posting videos. This gathers attention until related hashtags start trending, at which point the Weibo censors notice it and remove the trending hashtags. Afterwards, you can still use a direct search to find posts about the event, but only if you already know about it. If you repeat the search, you'll notice search results disappearing or having their pictures removed. Some time later, the first post for searches about e.g. the time and location of the event will return some official statement, with the comment section disabled. Some other search terms will still turn up posts from the first wave of activity, but after a day or so those disappear as well.
If the event is very sensitive, searches for it get blocked completely, even if awareness is high. For example, one study [0] found that among a sample of Beijing students who neither use VPN nor have a roommate who does, 56% answered a quiz on the Panama Papers correctly. If they had a VPN-using roommate, this increased to 78%. Nonetheless, when I search for "巴拿马文件" on Weibo, I get told there are no search results. [1]
They typically aren't removing content, just making it not visible to a broad audience. In this case it became broadly visible, so this post is about the only choice they have to look somewhat independent. PR at its finest.
Of course that was not done at the behest of the US Government. Criticism of the US, US politicians and Government officials is rampant on Western social media. The US Government doesn't try to censor this.
Meanwhile, Chinese tech companies are beholden to the whims of China's CCP leaders. China's internet is completely controlled by the CCP.
There is zero moral equivalence between the US Government and China.
> Of course that was not done at the behest of the US Government.
How would we know that?
When ActivisionBlizzard, a US company, banned a player for trying to turn a private event into a political platform, nobody wanted to see evidence for "The CCP forced ActiBlizz to do that!"; it was just accepted as established fact, even without the slightest bit of evidence.
> Criticism of the US, US politicians and Government officials is rampant on Western social media.
So is the demand to moderate it even more [0]. And this kind of criticism can have very far-reaching consequences in the real world [1].
> The US Government doesn't try to censor this.
No, it just nudges US social media companies into doing it for them under the very real threat of having government regulation forced on them [2]. As a German, I'm plenty familiar with this kind of "not government censorship" censorship that gives legal plausible deniability but still ends up forcing people to censor their content if they want it to get any exposure.
> Meanwhile, Chinese tech companies are beholden to the whims of China's CCP leaders.
Unlike US companies? The third-party doctrine [3] is still a very real thing, and heavy cooperation between US private companies and US intelligence services is still a very real thing. Just because people sometimes talk about PRISM, and pretty much only that [4], does not change the reality that these crimes are still going on to this day.
But that's okay because "we are allowed to talk about it!", somewhere outside the mainstream, every couple of months when Snowden manages to pierce a headline through the attention glass ceiling; after that it's back to business as usual.
A reminder: Intel is subsidized by the US government, and Intel's ME is pretty much exactly what Bloomberg's fictional Chinese super spy chip supposedly does [5]. The difference is: Intel ME is real [6], while Bloomberg's chip has yet to be actually found, even though there are supposedly tens of thousands of physical samples in the US.
This has very real consequences for companies outside the US [7]. Deflecting from that whole situation by saying "we sometimes talk about PRISM," when nothing changes and no major political movement is pushing for change, belittles the problem for the sake of projecting a completely fictional moral high ground.
Do you think it’s possible that the US government has enough control over public opinion/attention that they simply don’t need to explicitly censor in most cases? Who remembers PRISM? Did the government censor it? Or did we just conveniently forget?
Of course, that’s precisely my point. You can easily read about PRISM, and yet the government wasn’t held responsible and presumably didn’t change its actions, so it appears that censorship simply wasn’t necessary.
> Do you think it’s possible that the US government has enough control over public opinion/attention that they simply don’t need to explicitly censor in most cases?
No, not at all, and they absolutely do not.
> Who remembers PRISM? Did the government censor it? Or did we just conveniently forget?
I do, but sure, many don't. That's mostly evidence that the market for grievances against the government is a competitive one in the US (and the West in general): you'll always be able to cherry-pick some single issue (or several) that is serious yet falls below the radar of the general populace.
In the West, we can curse our leaders' names all day long, publicly, with a megaphone. In the East, you can't even mention Winnie the Pooh.
It doesn't even mention the possibility of political motivations. Nor does it mention the concept of freedom of speech. I think it says all we need to know about the lack of values from this company.
I can’t even imagine what kind of “values declaration” a Chinese company, under the scrutiny of the CCP, could possibly hope to make. Free speech? Transparency? Privacy protection? Honesty and integrity? Any one of these is clearly a lie given the country that they’re based in.
I don’t know what the excuse of those other tech companies outside China could be. I guess when you move fast and break things, some of the things being broken are promises.
> The People's Republic of China is a socialist state under the people's democratic dictatorship led by the working class and based on the alliance of workers and peasants.
> Article 2. Power belongs to the people
> All power in the People's Republic of China belongs to the people.
> Article 35. Freedom of speech, press, assembly
> Citizens of the People's Republic of China enjoy freedom of speech, of the press, of assembly, of association, of procession and of demonstration.
> Article 37. Freedom of person
> Unlawful detention or deprivation or restriction of citizens' freedom of the person by other means is prohibited, and unlawful search of the person of citizens is prohibited.
> Article 39. Inviolability of the home
> The residences of citizens of the People's Republic of China are inviolable. Unlawful search of, or intrusion into, a citizen's residence is prohibited.
Is "unlawful" there translated from a word with similar connotations in Chinese? Because "unlawful [search, etc] is prohibited" is...a bit tautological.
>We will also be reviewing our policies to allow carve-outs for things like education and satire, as other platforms do.
But note that most platforms don't actually advocate full freedom of speech on their own platform. For example Hacker News has a lot of rules of what to post and how to behave.
Almost half the games I play are Chinese owned, so one more entertainment app from them is fine by me. I like browsing TikTok when I’m waiting for things.
I'm pretty sure WeChat is up to a ton of shady stuff that will probably eventually come out. Chinese government involvement combined with the level of permissions it asks for screams bad things to me.
I'm not sure; there was a ban screen that showed up saying I was using unofficial software to log into my account, and a self-service unblock process had to be completed, which included getting a number of WeChat users to verify/vouch for my account.
The specific reason given by WeChat was that I was using an Android emulator (such as Andy or Youwave) and was sent to this page[0]
If it's sandboxed, then it can track the user less. They claim it's anti-spam, but anti-spam can be done without such extreme measures. Same problem with the spyware Snapchat.
This actually seems like a pretty reasonable and detailed response to me. The content was down for 50 minutes before their internal checks reinstated it. Probably a much better and more transparent response than any of the other socials would provide.
I chuckled at Hasan Minhaj’s (obviously at least semi-facetious) take on TikTok being a Chinese company during one of his stand-ups. The gist of it was, basically, what if all of TikTok’s user base was suddenly doing their usual viral moves, but wearing something that says “I stand with Hong Kong”, for example. That’d be bound to send CCP for a loop…
Anyone know why TikTok and Douyin are bifurcated domains of the same app? They want to separate their domestic and international data mining efforts to minimize pre-processing or something?
It's the only way to be a Chinese company and have an international audience. Douyin fully complies with Chinese regulators (i.e. censors), while TikTok's content is managed separately.
It's the same reason the WeChat that exists on the Chinese internet is different from the international WeChat.
So, is there any platform that's both free, easy to use for non-techies, and just that: a platform, not a publisher? I'm fine with a company removing stuff when the law absolutely requires it to, but not in any other case.
I think the best way for America to deal with TikTok is to ban the app from the American app stores. This would allow current users to continue to use it, and have their data collected, while stymieing the growth of the app/cancer. India already restricted it due to child abuse, and TikTok leaders failed to show up at congressional hearings.
> TikTok user posted a video that included the image of Osama bin Laden, resulting in an account ban in line with TikTok's policies against content that includes imagery related to terrorist figures.
The Chinese government has started to classify Hong Kong protestors as terrorists. What good are these policies when the core definition of the law itself is left to interpretation by the Chinese government?
The whole enterprise of China and its principles is just... absurd. The elephant in the room is complete authoritarian control by an uncontestable leader, but let's bikeshed about company policies. What company policies, if they rest only on stilts built by the CCP?
Yes, the CCP hit the nail on the head by putting over a million muslims in concentration camps. Oh and Falun Gong practitioners + Tibetan monks were simply sleeper agents.
Now, that I think of it - yes. You're right. I can't trust the Chinese government at all. To the CCP, anyone who criticizes the government is a potential terrorist.
What is Edward Snowden, then? Will he be greeted with open arms when returning to the USA, or immediately end up in a black site? He criticized the real government of the USA, not the sham you see on news media.
Edward Snowden is newsworthy because he is one person; he is the exception.
The CCP categorizing tens of thousands of its own citizens in Hong Kong as terrorists and a million plus in the western part of the country as potential terrorists isn't even remotely the same thing.
First, that number is simply incorrect; it is about a quarter of that.
Second, while America has a serious prison problem (the highest incarceration rate in the world? really??), the current state of affairs of African-Americans isn't really comparable to the treatment of Uighurs, Tibetans, and Falun Gong practitioners.
> > > What is Edward Snowden, then? Will he be greeted with open arms when returning to the USA, or immediately end up in a black site?
> > He also leaked a lot of documents that he wasn't supposed to.
> So did Daniel Ellsberg with the Pentagon papers.
Daniel Ellsberg wasn't a contractor at the NSA either, though he "was charged under the Espionage Act of 1917 along with other charges of theft and conspiracy, carrying a total maximum sentence of 115 years."[0]
I doubt Snowden will be so lucky as to share Ellsberg's fate of all charges being dismissed "[d]ue to governmental misconduct and illegal evidence-gathering"[0].
Ellsberg himself has specifically defended Snowden, saying Snowden "made the right call" in fleeing the country [0] and "would not get a fair trial" if he returned [1].
> Ellsberg himself has specifically defended Snowden, saying Snowden "made the right call" in fleeing the country [0] and "would not get a fair trial" if he returned [1].
Hence my assertion of:
> I doubt Snowden will be so lucky as to share Ellsberg's fate ...
Yes, I'm agreeing with you and providing additional substantiation as to why a reader should believe you that Snowden would be unlikely to share Ellsberg's fate.
US social media has deleted graphic pictures of US torture as "terrorist propaganda" before [0], and even quite historical pictures have been classified as "child pornography" [2]. For over two years now, YouTube has been heavily cracking down on "extremist content" by deleting conflict footage, destroying evidence of war crimes in the process [3].
It's hard to even quantify the problem because most people are not even aware of it existing.