Authors of studies like this are well aware of the problem. I haven't read this paper, but such papers often go to great pains to explain how they controlled for or screened out other causes.
I think it's fair to be offhandedly sceptical without digging into the details of the paper. Since we don't really understand depression, it seems fairly obvious that it would be nigh-on impossible to control for causes we don't know about.
While “correlation is not causation” is a tired refrain, I agree that default skepticism is generally the appropriate response. There are peer-reviewed scientific studies showing essentially anything and everything you could want to believe. Relatively few hold up to scrutiny or to replication.
There is undoubtedly an opposing study that shows increased screen time correlates to happiness.
Various possible confounders were discussed at length in the article. The obvious one, depression leading to social media use:
“Two followed people over time, with both studies finding that spending more time on social media led to unhappiness, while unhappiness did not lead to more social media use. A third randomly assigned participants to give up Facebook for a week versus continuing their usual use. Those who avoided Facebook reported feeling less depressed at the end of the week.”
It's mentioned in the article. They refer to three other studies: one randomly assigned participants to give up Facebook for a week, and they felt less depressed as a result; the other two found that spending more time on social media led to unhappiness, while unhappiness did not lead to more social media use.
huh? It could very well be that teens who are depressed (because of unrelated causes) prefer to spend time on social media, not hanging out with friends...
Strong, persistent correlation between A and B generally implies that it's one of three scenarios: A->B, B->A, or C->A + C->B.
In cases where it's not possible to run proper experiments to explicitly test the A->B hypothesis, you'd say that correlation implies causation in the (not that rare) scenario where you have strong reasons to believe that B->A is not possible (e.g. the increase in teen suicides IMHO did not cause the increase in screen time) and you have thoroughly gone through all the plausible confounders (C's) with argumentation or evidence for why each is not at play. Even then it's not solid proof, but it's a strong indication and worth treating as likely true until further analysis or evidence.
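The third scenario is easy to see in a toy simulation. This sketch uses entirely made-up data (nothing from the article): a hidden confounder C drives both A and B, so A and B correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

n = 10_000
c = [random.gauss(0, 1) for _ in range(n)]    # hidden confounder
a = [ci + random.gauss(0, 0.5) for ci in c]   # A depends only on C
b = [ci + random.gauss(0, 0.5) for ci in c]   # B depends only on C

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# Strong correlation despite zero direct causation in either direction.
print(f"corr(A, B) = {pearson(a, b):.2f}")
```

If you only observe A and B, this is indistinguishable from A->B or B->A, which is exactly why the confounder hunt matters.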
Effects measured as part of a randomisation procedure in a controlled experiment or, failing that, through a causal-inference model applied to observational data.
Many of the answers here are pointing to methods, but I take this question as more fundamental.
Epistemologically, causation is the intersection of three things:
1) Temporal precedence
2) Covariance (i.e. correlation)
3) Absence of likely alternative explanations
Number 3 is by far the trickiest. Our inability to definitively rule out all possible alternatives means one has to resort to inference where causation is concerned.
Randomised intervention, i.e. randomly selecting a group of people to change their behaviour and comparing them to a group that doesn't change, which ideally receives a placebo. The random part is important: any other type of selection (e.g. observing people who choose to change their own behaviour) doesn't work, at least in theory.
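Here's a toy sketch (hypothetical numbers, not from any study) of why the random part matters. The true treatment effect is zero, but unhappier people self-select into heavy use, so the observational comparison shows a spurious gap that random assignment doesn't.

```python
import random

random.seed(1)

n = 20_000
unhappiness = [random.gauss(0, 1) for _ in range(n)]  # hidden trait

def group_gap(trait, in_group):
    """Mean trait in group minus mean trait outside it."""
    k = sum(in_group)
    inside = sum(t for t, g in zip(trait, in_group) if g) / k
    outside = sum(t for t, g in zip(trait, in_group) if not g) / (len(trait) - k)
    return inside - outside

# Self-selection: unhappier people are more likely to be heavy users.
heavy = [u + random.gauss(0, 1) > 0 for u in unhappiness]
obs_diff = group_gap(unhappiness, heavy)

# Randomisation: assignment is independent of the hidden trait.
assigned = [random.random() < 0.5 for _ in range(n)]
rct_diff = group_gap(unhappiness, assigned)

print(f"self-selected difference: {obs_diff:.2f}")  # large spurious gap
print(f"randomised difference:    {rct_diff:.2f}")  # near zero
```

Randomisation works because it makes group membership statistically independent of every confounder, known or unknown.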
Don’t confuse studies with experiments: only the latter can establish causation (because you need randomised experimental and control groups). Observational studies are for highlighting strong correlations while trying to take all the possible causes into account.
“Correlation does not imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there.’” (Thanks to XKCD)
It’s true that correlation does not imply causation. But it’s a stupid middlebrow dismissal to throw out there.