Lately I've been getting ads for depression medication and the like. Jokes about me being depressed aside (I'm not, and no, Google's ad tech does not know better than I do), targeting people based on their presumed state of mind is deeply unethical, and at the scale of the largest social networks it could have disastrous societal consequences.
In light of that, I don't think machine learning should actively guide people toward content; let people ask for things specifically. The recommended-videos section should just be videos related to the one you're currently watching, not videos you might be interested in. (Look at Pornhub. Pornhub is terrible now. I just get the same videos over and over again.) Machine learning is used less and less to categorize content, and more and more to match content to your previous watching patterns. Those sound the same, but they aren't: one describes the video, the other describes you.
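To make the distinction concrete, here is a toy sketch (entirely hypothetical data and scoring, not any platform's actual algorithm): a "related videos" recommender that only looks at the current video's tags, next to a "for you" recommender that builds a profile from everything you've watched. Given the same catalog, they surface different videos.

```python
# Hypothetical toy catalog: video id -> set of descriptive tags.
videos = {
    "cooking_101":  {"cooking", "beginner", "food"},
    "knife_skills": {"cooking", "technique", "food"},
    "poker_basics": {"poker", "beginner"},
    "poker_bluffs": {"poker", "technique"},
    "poker_tells":  {"poker", "technique"},
}

def related_to(current, catalog):
    """Content-based: rank other videos by tag overlap with the CURRENT video."""
    tags = catalog[current]
    scored = {v: len(tags & t) for v, t in catalog.items() if v != current}
    return sorted(scored, key=scored.get, reverse=True)

def personalized(history, catalog):
    """History-based: rank unwatched videos by overlap with the USER's profile."""
    profile = set().union(*(catalog[v] for v in history))
    scored = {v: len(profile & t) for v, t in catalog.items() if v not in history}
    return sorted(scored, key=scored.get, reverse=True)

# You are currently watching a cooking video, but your history is all poker.
print(related_to("cooking_101", videos)[0])                      # knife_skills
print(personalized(["poker_basics", "poker_bluffs"], videos)[0]) # poker_tells
```

The first function can only ever describe the video; the second one describes you, and will keep pulling you back toward whatever you already watched.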
I read somewhere a few years back that Google is stuck squeezing ever more ROI out of its advertising technology, because ads dwarf every other part of their business. They branch out a little, the investment doesn't pan out as well as they'd hoped, and so they squeeze harder on the ad side: making more content "sponsored", or, as they did recently, doubling the number of ads you have to watch on a video.
Every new technology ever has come with significant unintended social consequences, so we should all figure out what those consequences are and actively try to manage them, not shrug our shoulders and say "that's just how it is".
But we also need new theories, and a radical change in our understanding, of social network dynamics and their constituent forces and counterforces. When YouTube had that controversy over machine learning helping pedophiles find (what should have been unlisted) videos of young girls trying on bathing suits, they reached their hands into the system and started uncorrelating things manually: deleting videos, blacklisting search terms. Facebook has subcontracted thousands of people in Asia to examine potential "fake news", nation-state cyberoperations, and the like on their platform and remove it as necessary, and has modified its internal advertising rules to counter professional provocateurs.
This is not sustainable. The deepening integration of machine-learning-driven social networks into the interstitial matrix of our day-to-day lives can't be met simply by special-casing the network by hand, and only after the effects have become serious enough to cause widespread concern or, worse, a sociocultural shock the likes of which we are only just beginning to see.
Simply disconnecting is not an option. LinkedIn has become a professional requirement. Even if you do not "tweet", you have to hear about tweets on the news. So this is our future. Hopefully it is a manageable one.