Self Referential Meta Dementia?
June 16, 2019 5:48 AM

Since the 60s, *every* field has become beset by specialization, fueled by increasing competition for jobs and spiraling productivity, which forces researchers to take a narrow, goal-oriented approach to science. Those who don't do this to some degree vanish. (twitter thread)

Thread reader version.

Inciting article from Nature.

Preprint on GitHub (pdf).


More than a half-century ago, the ‘cognitive revolution’, with the influential tenet ‘cognition is computation’, launched the investigation of the mind through a multidisciplinary endeavour called cognitive science. Despite significant diversity of views regarding its definition and intended scope, this new science, explicitly named in the singular, was meant to have a cohesive subject matter, complementary methods and integrated theories. Multiple signs, however, suggest that over time the prospect of an integrated cohesive science has not materialized. Here we investigate the status of the field in a data-informed manner, focusing on four indicators, two bibliometric and two socio-institutional. These indicators consistently show that the devised multi-disciplinary program failed to transition to a mature inter-disciplinary coherent field. Bibliometrically, the field has been largely subsumed by (cognitive) psychology, and educationally, it exhibits a striking lack of curricular consensus, raising questions about the future of the cognitive science enterprise.
posted by sammyo (10 comments total) 29 users marked this as a favorite
as somebody who was pretty much forced to decide way before I wanted to (halfway through my second last year of high school) that I was an artist, not a scientist, I am always secretly, perhaps smugly, pleased to see a so-called Science fumble onto the realization that ... hmmm, maybe this particular field of knowledge can't be locked down into "... a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[2][3][4]"
posted by philip-random at 9:02 AM on June 16 [3 favorites]

... and I do like this one (from the Twitter thread):

12/ But if researchers fail to come up for air to situate their work in the arc of history, they risk measuring the contours of the hole they've dug themselves into, rather than addressing the questions that set the field digging in the first place.
posted by philip-random at 9:05 AM on June 16 [4 favorites]

How can a field of research simply start with a conclusion - the mind is a computer? Wouldn’t that already narrow the search?

Back when I was in college, there was the usual line - any field with “science” in the name isn’t.
posted by njohnson23 at 9:18 AM on June 16 [2 favorites]

I just don't understand why someone would think twitter is a good place to make a complex argument, especially given that there has been a spate of opinion pieces saying that universities reward tweetable research because it is easier to announce and disseminate.

Oh. Now I see....
posted by lesbiassparrow at 9:56 AM on June 16 [6 favorites]

I have a PhD in Cognitive Science, but like many who do, it's a dual degree, because most Cognitive Science programs also assume you have a "home department" (for me it was linguistics). As such, most Cog Sci programs (note that many are "programs" and don't function as departments in their own right, which is telling) really don't support a true multi-disciplinary research framework. At best, you have the ability/time to draw together two or three fields (e.g. theories from linguistics, methodologies from cognitive psychology or neuroscience), but even so, "cognitive science research" remains fragmented, specialized, and has not resulted in a unified field with a cohesive theoretical framework about cognitive representation or the functioning of the mind/brain (which, BTW, is a better characterization of the main inquiries of the field than "the mind is a computer", IMO).

The "practicality" argument that Barner makes is spot on: given the pressures of academia (i.e. "publish or perish") the system does not currently function to foster the kind of broad inquiry that a truly interdisciplinary (and cohesive) field would require.
posted by k8bot at 9:56 AM on June 16 [13 favorites]

Yeah, I went to college intending to study cognitive science based on popular readings from the 1980s. I was dismayed to discover that my university's department had, in the intervening 20 years, become mostly a neuroscience department with a couple anthropologists, linguists, and computer scientists rounding out the roster. I majored in computer science and mathematics instead, and in my mid 30s am wishing I'd done the minor in philosophy I always kinda wanted to make time for, as it'd be helpful in my current role. There's still call for people who can straddle those disciplinary boundaries; there just aren't many people whose formal educations have prepared them adequately to do it.
posted by potrzebie at 10:39 AM on June 16 [3 favorites]

It's impossible for a scientist to even keep up with the research in a narrowly focused field.

Expecting them to be multidisciplinary is just multiplying both their workload and their ignorance.
posted by srboisvert at 10:49 AM on June 16 [3 favorites]

As an undergrad in RPI's cognitive science program (which gets a few mentions here) back in '98-'00, much of this rings true. To the basic charge of cognitive science having lost its way... the lead graphics programmer on Bioshock Infinite had a saying (paraphrasing): "when tools and methods move into the daily workflow of tech art/level building they stop being referred to as procedural content." Every time a particular topic began showing promise for generating spinoffs or research grants it was immediately appropriated by neuroscience, computer science, or psychology.

Margaret Boden's characterization "The field would be better defined as the study of 'mind as machine'... More precisely, cognitive science is the interdisciplinary study of mind, informed by theoretical concepts drawn from computer science and control theory" is pretty dead-on for a mission statement. RPI's program was even called Minds & Machines, and the requirement for participation was dual-majoring in computer science and either psychology (~85% of students) or philosophy, so I can't disagree with the relative failure of the "interdisciplinary" qualifier. As you might expect from the paper a smattering of linguistics and anthropology were covered in our classes, but those majors didn't count for degree requirements.

By the late 90s all three of the "essential original features" at the top of page 7 were no longer in evidence:

1) mental representation as separate from the biological or neurological
2) the electronic computer as central to any understanding of the human mind
3) deliberate de-emphasis of cultural factors/context

Neural networking and Brooks at MIT championing bottom-up AI more or less demolished all of that, because it was immediately apparent that algorithms which mimicked the neural substrate of the human mind were far superior to prior methods at *learning*. Environment and life experiences shape neurotopology (duh), which obviously includes cultural context, so you can't discount any of that. It was also clear to anyone who spent ten minutes working with neural networks that the semantic and neural layers are completely decoupled. The paper alludes to this background a bit but vastly underrepresents how serious the emergence of neural approaches was at the time. And of course despite (or because of) Brooks' influence, even in the late 90s the older tenured professors were still fighting a tooth and nail rearguard action for top-down semantic hierarchy/symbolic representation approaches - as much as anything because of how neatly they fit into the model of functional programming. Most of why I left was related to mental illness (I dropped out having completed most of the coursework), but the continual frustration of being an undergrad pushing back on that insistence didn't help matters.
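To make the "semantic and neural layers are completely decoupled" point concrete, here is a minimal sketch (mine, not from the thread or the paper): a tiny multilayer perceptron trained on XOR by plain backpropagation. The network sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# A tiny MLP learns XOR, yet no individual learned weight corresponds
# to a symbolic rule like "A and not B" -- the semantics live only in
# the network's overall behavior, not in any neural-level component.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(30000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    delta2 = (out - y) * out * (1 - out)        # squared-error gradient
    delta1 = (delta2 @ W2.T) * h * (1 - h)      # backpropagate to hidden
    W2 -= lr * (h.T @ delta2); b2 -= lr * delta2.sum(0)
    W1 -= lr * (X.T @ delta1); b1 -= lr * delta1.sum(0)

print(out.round().ravel())   # learned input -> output mapping for XOR
```

Inspecting `W1` or `W2` afterward tells you essentially nothing about *which* logical function was learned, which is the decoupling the comment describes.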

While I'd agree that cognitive science has more or less failed to cohere into a unified field with a canonical syllabus, I don't think there was ever much doubt in my mind or the minds of my fellow students about what we were trying to do: develop an abstract working model of *minds*, plural, with their generalized systems modeling (problem solving), compartmentalized internal state tracking and associativity (identity), goal-directed behavior (intentionality), and capacity for self-referential agent behavior prediction within those generalized models (group behaviors). A broader theoretical framework of which the human mind is but one instance, similar to how the x86 CPU architecture is but one instance of a Turing Machine (the abstract model for all devices which perform computation, if you're not from CompSci and made it this far).
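For readers not from CompSci, the Turing Machine analogy above can be sketched in a few lines (my illustrative example, not anything from the thread): a machine is just a tape plus a `(state, symbol) -> (state, symbol, move)` transition table, and any concrete architecture such as x86 is one physical realization of that abstract scheme.

```python
# A minimal Turing machine: a tape and a transition table. This example
# machine scans right, flipping every bit, then halts at the first blank.
def run_tm(tape, transitions, state="start", blank="_"):
    """Run a Turing machine until it enters the 'halt' state."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        elif write != blank:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table for the bit-flipping machine.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip))  # -> 0100
```

The table is trivially simple here, but swapping in a richer transition table is all it takes to compute anything computable, which is the sense in which the human mind would be "but one instance" of a broader framework of minds.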

If there's anything I regret in all of this it's Hofstadter writing in 1999 for the 20th anniversary edition preface of Gödel, Escher, Bach, "It sometimes feels as if I had shouted a deeply cherished message out into an empty chasm and nobody heard me." We heard, and the recursion inherent to agent prediction of interactions between self and other agents was always assumed because we'd all read his book. It's also why I think the modern primacy of neural networking over all other AI research was an overcorrection - an open-ended framework for minds requires solving BOTH the semantic and neural sides of the problem. But, seconding k8bot, the emphasis on publish or perish means all of this was doomed to starve in modern academia.
posted by Ryvar at 11:13 AM on June 16 [17 favorites]

Back when I was in college, there was the usual line - any field with “science” in the name isn’t.

As someone with a degree in planetary science, I resemble this remark.
posted by Four Ds at 4:40 PM on June 16

This is a thoughtful and interesting thread.

I'm not entirely convinced by the argument, though. I definitely don't read every paper published in the field with my department's name. I doubt I read 1/50th of them. But, that's okay. When something important happens, colleagues across the hall will tell me about it. And, since they know me, they'll explain why it's important without jargon in a much shorter time than it would take me to read the paper itself. All of our subfields have made progress that would have been unimaginable fifty years ago. I'm not sure that's uncorrelated with specialization. That colleagues who are doing the work closest to my own are in three other departments is a bit weird. But, we all talk to each other and collaborate regularly on proposals and papers. If you don't... that's your problem.

Specialization doesn't seem like such a bad thing, as long as one is also interested in the world and talks to colleagues. Perhaps we need broader conference mailing lists. I'm not sure that trying to read the hundred papers a day that someone in my building reads would be useful.

As a quantitative asshole viewing qualitative academia entirely through the stories of friends, I'm also deeply skeptical of "grand theories" as a concept. Many of them seem to be misleading and it's hard to tell the difference between the useful ones and the terrible ones except when incredibly specific specialists actually examine them on an image-by-image or phoneme-by-phoneme basis and discover that many of them aren't actually good descriptions of reality. (I'm terribly spoiled by working in a field where it's generally easy to be proven wrong. And, I recognize there are incredibly valuable fields where that is not true.)
posted by eotvos at 12:58 PM on June 17

