Topics on my mind: January 2018
My degree has forced me to start learning some linguistics, which turns out to be very interesting. It feels like, in trying to figure out what understanding language really means, we're grappling with the very notion of concepthood, and the nature of intelligence itself. My thesis is based on the question of how to represent words in machine learning models. Vectors seem to work pretty well, and make intuitive sense for nouns and verbs at least. In such models, if you take the vector for king, subtract man, and add woman, you end up close to queen. Something can be more or less 'rain' (drizzle, shower, downpour, torrent), or more or less 'run' (jog, lope, sprint), or even more or less 'bird' (ostrich, penguin, vulture, sparrow). Things get a little more complicated when we consider determiners, conjunctions, and prepositions, since you can't really be more or less 'for' or 'or'. And when it comes to putting it all together into representations of sentences, we really have no good solution. Yet.
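The vector arithmetic above can be sketched in a few lines of code. The embeddings below are made-up 3-dimensional toy vectors chosen purely for illustration; real models like word2vec learn vectors with hundreds of dimensions from large text corpora, and the analogy is answered by finding the nearest neighbour of the combined vector (excluding the query words, as standard evaluations do):

```python
import math

# Hypothetical toy embeddings; the dimensions loosely stand for
# (royalty, male-ness, female-ness). Real embeddings are learned, not hand-set.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(a, b, c):
    """Return the word whose vector is most similar to a - b + c."""
    target = [x - y + z
              for x, y, z in zip(embeddings[a], embeddings[b], embeddings[c])]
    # Exclude the query words themselves, as word2vec-style evaluations do.
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("king", "man", "woman"))  # → queen
```

With these toy numbers, king − man + woman gives roughly (0.9, 0.0, 0.9), which sits closest to queen; in learned embeddings the result is only approximately queen, which is why the nearest-neighbour search matters.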
I'm becoming more sympathetic to continental philosophy. I found this lecture on the psychology of religion by Jordan Peterson fascinating. While difficult to summarise, Peterson's main claim is roughly that the last two millennia of religious development have selected for narratives which resonate deeply with human psychology, and that if we want to understand the latter we should pay more attention to the former; he is particularly inspired by Nietzsche and Jung. I feel like I should also try to read some Marx and Freud, if only for the intellectual background (also because so many people wanted me to add their work to my list of humanity's greatest intellectual achievements). This continues a slow swing back from my high school years, in which I was very focused on technical subjects and knew nothing about history, art, or languages. Now I know quite a lot about history and a fair bit about art, but still no other languages. :(
I'm also coming to believe that I, and almost everyone else, have underestimated the importance of sociology. I don't think that governments should necessarily be trying to steer culture, but it's definitely worthwhile for policymakers to be aware of what is needed for communities and societies to thrive. Of course, part of the problem is that sociology isn't nearly as systematic as, for instance, economics. But neither is psychology, which researchers like Kahneman and Tversky have nevertheless managed to ingrain into the public consciousness. I'm hoping that work of equivalent importance will arise in sociology at some point - and that it includes not just descriptions of how society functions, but also prescriptions on how to change it. The reason I'm warming to continental philosophy is that it attempts this normative analysis of large-scale culture - but while some of its insights seem important, they're not backed up by enough data for me to fully trust them. By contrast, Putnam's excellent book Bowling Alone, on the decline of social capital in America over the last 70 years, makes claims which are just as broad, but thoroughly rigorous. The amount of data that he had to manually sift through made writing the book a massive task. But as more and more data becomes available from the tech sphere, I'm hoping to see many more people using big data to answer sociological questions with a philosophical mindset. Christian Rudder, founder of OKCupid, is going in the right direction with his book Dataclysm, even if the implications of his conclusions aren't always teased out. There are also a number of economists, such as Tyler Cowen and Bryan Caplan, who are doing interesting analysis along these lines, albeit focusing more on economic claims.
I'm worried that we're simply clueless about how our actions will affect the far future. It seems like for every convincing argument I read, I later stumble upon a counterargument of even greater weight. For the last few years I've been very worried about the prospect of human extinction. But I recently watched the latest Black Mirror episode, Black Museum, which is about how sufficiently advanced technology can allow you to cause others arbitrarily large amounts of suffering. If humanity survives, it seems extremely likely that we'll reach that level of technology eventually. And if people can use it, then someone will. Perhaps a very powerful, benevolent authority could prevent this, but that situation seems unlikely. How many expected years of torture would it take to outweigh the moral value of humanity surviving? When viscerally confronted even with fictional suffering, it feels like the answer shouldn't be that high. Either way, how could we have any estimate of the probabilities involved that isn't simply a stab in the dark?