Friday, July 19, 2013

"System 1" and "System 2" are intuitively appealing but don't make sense on reflection: Dual process reasoning & science communication part 1

“Dual process” theories of cognition (DPT) have been around for a long time but have become dominant in accounts of risk perception and science communication only recently, and in a form that reflects the particular conception of DPT popularized by Daniel Kahneman, the Nobel Prize-winning behavioral economist.

In this post--the first in a 2-part series-- I want to say something about why I find this conception of DPT unsatisfying.  In the next, I'll identify another that I think is better.

Let me say at the outset, though, that I don't necessarily see my argument as a critique of Kahneman so much as an objection to how his work has been used by scholars who study public risk perceptions and science communication.  Indeed, it's possible Kahneman would agree with what I'm saying, or would qualify it in ways that are broadly consistent with my argument and that improve it.

So what I describe as "Kahneman's conception," while grounded in his own exposition of his views, should be seen as how his position is understood and used by scholars diagnosing and offering prescriptions for the pathologies that afflict public risk perceptions in the U.S. and other liberal democratic societies.

This conception of DPT posits a sharp distinction between two forms of information processing: “System 1,” which is “fast, automatic, effortless, associative and often emotionally charged,” and thus “difficult to control or modify”; and “System 2,” which is “slower, serial, effortful, and deliberately controlled,” and thus “relatively flexible and potentially rule-governed.” (Kahneman did not actually invent the “system 1/system 2” terminology; he adapted it from Keith Stanovich and Richard West, psychologists whose masterful synthesis of dual process theories is subject to even more misunderstanding and oversimplification than Kahneman's own.)

While Kahneman is clear that both systems are useful, essential, “adaptive,” etc., System 2 is more reliably connected to sound thinking.  

In Kahneman’s scheme, System 1 and 2 are serial: the assessment of a situation suggested by System 1 always comes first, and is then—time, disposition, and capacity permitting—interrogated more systematically by System 2 and consciously revised if in error.

All manner of “bias,” for Kahneman, can in fact be understood as manifestations of people’s tendency to make uncorrected use of intuition-driven System 1 “heuristics” in circumstances in which the assessments that style of reasoning generates are wrong.

Human rationality is “bounded” (an idea that Kahneman and those who elaborate his framework take from the pioneer decision scientist Herbert Simon), but how perfectly individuals manifest rationality in their decisionmaking, on Kahneman’s account, reflects how adroitly they make use of the “monitoring and corrective functions of System 2” to avoid the “mistakes they commit” as a result of over-reliance on System 1 heuristics.

This account has attained something akin to the status of an orthodoxy in writings on public risk perception and science communication (particularly in synthetic works in the nature of normative and prescriptive “commentaries,” as opposed to original empirical studies).  Popular writers and even many scholars use the framework as a sort of template for explaining myriad public risk perceptions—from those posed by climate change and terrorism to nuclear power and genetically modified foods—that, in these writers’ views, the public is over- or underestimating as a result of its reliance on “rapid, intuitive, and error-prone” System 1 thinking, and that experts are “getting right” by relying on methods (such as cost-benefit analysis) that faithfully embody the “deliberative, calculative, slower, and more likely to be error-free” assessments of System 2.

This is the account I don’t buy.

It has considerable intuitive appeal, I agree.  But when you actually slow down a bit and reflect on it, it just doesn’t make sense.

The very idea that "conscious" thought "monitors" and "corrects" unconscious mental operations is psychologically incoherent.

There is no thought that registers in human consciousness that wasn’t, an instant earlier, residing (in some form, though likely not one that could usefully be described as a “thought,” or at least not as anything with a concrete, articulable propositional content) in some element of a person’s “unconsciousness.”

Moreover, whatever yanked it out of the stream of unconscious “thought” and projected it onto the screen of consciousness also had to be an unconscious mental operation.  Even if we imagine (cartoonishly) that there was a critical moment in which a person consciously “noticed” a useful unconscious “thought” floating along and “chose” to fish it out, some unconscious cognitive operation had to occur prior to that for the person to “notice” that thought, as opposed to the literally infinite variety of alternative stimuli, inside the mind and out, that the person could have been focusing his or her conscious attention on instead.

Accordingly, whenever someone successfully makes use of the “slower, serial, effortful, and deliberately controlled” type of information processing associated with System 2 to “correct” the “fast, automatic, effortless, associative and often emotionally charged” type of information processing associated with System 1, she must be doing so in response to some unconscious process that has reliably identified the perception at hand as one in genuine need of conscious attention.

Whatever power “deliberative, calculative, slower” modes of conscious thinking have to "override" the mistakes associated with the application of “rapid, intuitive, and error-prone” intuitions about risk, then, necessarily signifies the reliable use of some other form of unconscious or pre-conscious mental operations that in effect “summon” the faculties associated with effortful System 2 information processing to make the contribution they are suited to making.

Thus, System 2 can’t reliably “monitor” and “correct” System 1 (Kahneman’s formulation) unless System 1 (in the form of some pre-conscious, intuitive, affective, automatic, habitual, uncontrolled, etc., mental operation) is reliably monitoring itself.

The use of System 1 cognitive processes might be integral to the “boundedness” of human rationality.  But how close anyone can come to perfecting rationality necessarily depends on the quality of those very same processes.
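
To make the point concrete, here is a toy simulation--my own illustration, with made-up numbers; nothing in it comes from Kahneman: however accurate effortful System 2 reasoning is, overall accuracy is capped by the reliability of the unconscious monitor that decides when to summon it.

```python
import random

def simulate(n_trials=100_000, p_tricky=0.5, monitor_hit=0.6, sys2_acc=0.99):
    """Toy model of the serial 'System 1 then System 2' pipeline.

    System 1 answers instantly: right on easy problems, wrong on
    'tricky' ones. System 2 is nearly always right--but it is engaged
    only when an unconscious monitor flags the problem as tricky.
    All parameter values are hypothetical.
    """
    correct = 0
    for _ in range(n_trials):
        tricky = random.random() < p_tricky
        if tricky:
            if random.random() < monitor_hit:          # monitor catches it;
                correct += random.random() < sys2_acc  # System 2 takes over
            # else: an uncorrected System 1 error
        else:
            correct += 1                               # System 1 suffices
    return correct / n_trials

# However good System 2 is, overall accuracy is bounded by the quality
# of the unconscious process that summons it:
for hit in (0.2, 0.6, 0.95):
    print(f"monitor hit rate {hit:.2f} -> accuracy {simulate(monitor_hit=hit):.3f}")
```

Run the sketch and the ceiling is plain: with a 99%-accurate System 2, overall accuracy still ranges from roughly 0.60 to 0.97 depending entirely on the hit rate of the pre-conscious monitor.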

The problem with the orthodox picture of deliberate, reliable, conscious "System 2" checking impetuous, impulsive "System 1" can be called the “System 2 ex nihilo fallacy”: the idea that the form of conscious, deliberate thinking one can use to "monitor" and “correct” automatic, intuitive assessments just spontaneously appears—magically, “out of nothing,” and in particular without the prompting of unconscious mental processes—whenever heuristic reasoning is guiding one off the path of sound reasoning.

The “System 2 ex nihilo fallacy” doesn’t, in my view, mean that dual process reasoning theories are “wrong” or “incoherent” per se.

It means only that the truth that such theories contain can’t be captured by a scheme that posits the sort of discrete, sequential operation of “unconscious” and “conscious” thinking that is associated with the view I’ve been describing—a conception of DPT that is, as I’ve said, pretty much an orthodoxy in popular writing on public risk perception and science communication.

In part 2 of this series, I’ll suggest a different conception of DPT that avoids the “System 2 ex nihilo fallacy.”

It is an account that is in fact strongly rooted in focused study of risk perception and science communication in particular.  And it furnishes a much more reliable guide for the systematic refinement and extension of the study of those phenomena than the particular conception of DPT that I have challenged in this post.

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. Am Econ Rev 93, 1449-1475 (2003).

Simon, H.A. Models of bounded rationality (MIT Press, Cambridge, Mass.; 1982).

Stanovich, K.E. & West, R.F. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23, 645-665 (2000).

Sunstein, C.R. Laws of Fear: Beyond the Precautionary Principle (Cambridge University Press, Cambridge, UK; New York; 2005).

 


Reader Comments (17)

This - Part 1, above - is a very perceptive comment. I await Part 2 with bated breath - seriously!

Tentative thought: I think it makes some sense to think that the movement from unconscious (tacit) thought (reasoning, cognition, etc.) to conscious thought (reasoning, cognition, etc.) is (in part) a continuum. But there are also problems with this continuum vision of the relationship between type 1 mental (or cognitive) processes and type 2 mental (or cognitive) processes. For example, some type 1 processes are presently largely impenetrable to explicit analysis and - at the other end of the spectrum - there are, it seems to me, unquestionably some forms of conscious thought (e.g., calculus, quantum mechanics) that seem not just to "perfect" tacit, inherited thought but (in some domains) transform or supersede it (at least to a substantial extent). Cf. my tentative take on the relationship between tacit and explicit mental processes in part 3 of my essay http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1079235

July 19, 2013 | Unregistered CommenterPeter Tillers

I don't know that I agree that "System 1" logically and necessarily prompts all use of "System 2." It can't be the case -- and I don't think you're saying -- that all unconscious processes are System 1 and all conscious thought is System 2. As you suggest, deliberate thinking involves many unconscious processes for every conscious thought, so when we try to solve a math problem, rewrite a paragraph, and perform other System 2-ish tasks, we employ plenty of unconscious thinking. But these unconscious processes could very easily be a second system -- a system of unconscious processes that go into slow, deliberate thought (System 2), separate from the system of unconscious processes that go into fast, hot, instinctive thinking (System 1).

But you're arguing that the first system of fast unconscious processes must be what cues us to fire up our System 2 faculties. I don't see why this should be the case. The argument seems to assume that we spend our lives in a purely System 1 mode, but if we're already in System 2 mode, we could use System 2 to evaluate whether or not to evaluate additional information using System 2 (that was clear, right?). For example, if I see a new blog post, that could "trigger" me to use System 2 to evaluate whether or not to engage in the System 2 activity of reading and thinking about the blog post. Yes, I suppose there's some recursion here, and something initially has to get my attention (I don't even know if the trigger of "words! I should read them!" is properly considered System 1), but that thing doesn't have to be particularly fine-tuned or reliable. It doesn't have to be what separates the information we really engage with from the information we don't. I anticipate agreeing with the anticipated Part 2 in the series -- these Systems do in fact feed into each other -- but I don't know that they logically have to, to the extent they appear to in practice.

July 20, 2013 | Unregistered CommenterMW

Dan,
Thank you so much for this post.
I agree with you about System 1 and System 2. I think that I also agree with Kahneman's details.
I come at System 1 and System 2 from a different point of view. I start with single cells and build up from there (currently at millions of 'neurons' and hundreds of millions of 'synapses'). In this approach, System 1 and System 2 don't really exist, and to the extent that I want to pretend they exist, they have indistinct and fluid 'boundaries.' The ideas of System 1 and System 2, as Kahneman said, are useful heuristics to help us to ask better questions. Also, 'conscious,' 'unconscious,' 'rational,' and 'irrational' have well-defined meanings at the cellular level. These meanings, however, are not the meanings of popular literature or even of scientific literature in many fields. For instance, the underlying biochemistry of each cell is completely rational in support of thoughts that we call, at a larger level, rational or irrational.
We are in the middle of laying out all these concepts in rigorous detail. We need people to collaborate with, especially those who will ask hard questions and help us to find out where we clearly have no idea what we are talking about.
If you or any of your readers want to have conversations in more detail or start projects together, please contact me.
As an aside, what we think we understand extends into psychology and cultural cognition and makes predictions in those fields. So far, these predictions have been correct.

July 20, 2013 | Unregistered CommenterEric Fairfield

@MW:

I suppose there could be 3 systems or 4 or more if we assume that there are unconscious processes not in System 1 and conscious processes not in System 2, etc.

But I don't see anything at stake in doing a census of System 1 relative to the universe of unconscious dispositions.

The only point that I think is of consequence is that it is incoherent to think that the "conscious, reflective" processes of thinking that the orthodox view treats as distinctive of System 2 operate w/o effective summoning and guidance by unconscious processes.

Accordingly, high-quality "System 2" reasoning depends, necessarily, on high-quality unconscious processes.

I don't think the "orthodox" view can handle that proposition. But if someone wants to go through the argumentative & interpretive exegetical exercise necessary to extract this from the orthodox "system 1/system 2" accounts (the ones that dominate behavioral economic accounts of risk perception & science communication), fine, let them.

B/c once they do, they will have a descriptive account that not only matches what I think to be the best one of how unconscious & conscious mental operations figure in the perception of risk & like policy-relevant facts; they will also have one that can't be reconciled with the explanations, predictions & prescriptions that are generated by those who rely on the orthodox conception.

It's the myriad misunderstandings of science communication that the orthodox conception reflects & propagates that motivate me to critique it. I'm interested in engaging the world, not in interpretive play w/ words & texts.

July 20, 2013 | Unregistered Commenterdmk38

@Eric:

I'm gratified that you found the post useful while being unpersuaded by it! Obviously, one wants to convince others one is right. But if one can't say things (and in particular do empirical work that generates findings) that those who disagree recognize as pertinent, scholarly discussion is impossible.

I of course agree w/ you that Kahneman, and anyone who is relying on his account in a way that is faithful to it or simply interesting, has to be understood to be using "System 1" & "System 2" as "heuristics," or essentially as simplified "models" of processes & not as accounts of "things" that exist in the brain etc.

There are lots of complicated unobservable processes going on; to make sense of them, we need to figure out things we can observe that would help us to draw inferences about how those processes operate; and for helping us to figure out what sorts of observations would support what sorts of inferences, we will need to use simplified models. This is how all valid empirical work works--not just in the social sciences but in the natural sciences, too. Do Feynman diagrams describe real "paths" that quantum "entities" take?! Feynman certainly didn't believe anything so silly!

Things won't be any different when we study the brain either: even if you can see *different* things w/ instruments that measure activity in the brain than when you measure behavioral responses caused by what's going on "in" the brain, the "activity" you are looking at is still merely an observable indicator of the process you are interested in; it's not the process, which remains unobservable. You need a model that connects the things you are measuring "in the brain" to the processes. No escaping this. (Actually, all psychologists are engaged in studying the brain; some collect measurements of indicators "in" the brain & others of indicators in behavior. The question then becomes--well, what sorts of indicators are the most helpful: which support the best explanations, predictions, prescriptions, etc.? Only one way to find out--do your thing & we'll see. But my guess is that the best strategy will always be to combine results from different valid forms of observation & put the most confidence in findings that are convergently supported by them.)

I understand the various components of "cultural cognition"--including the "worldviews"--in this spirit & have said so.

So I wouldn't make a criticism of any conception of DPT that amounts to saying that things like system 1 & 2 "don't really exist." Of course, they don't.

But the question is whether they supply a reliable model of the processes we want to understand. That turns largely on whether they generate explanations, predictions & prescriptions that can be borne out by empirical testing, so it is fair to say, "Let's just look at evidence."

But in fact, a model that is conceptually incoherent won't reliably help us to make sense of evidence we see. If the model is incoherent, then the inferences it suggests either won't make sense or won't be unique w/r/t the evidence. Necessarily, too, such a model will cause us to spin our wheels as we make use of it to try to design studies that would generate the sorts of observations from which we can draw valid, reliable inferences.

That's the sort of critique I'm suggesting. I'm not done yet. But it certainly is the case that the entirety of the critique will connect the conceptual flaws I'm identifying to discrepancies between the orthodox DPT account & empirical evidence.

July 20, 2013 | Registered CommenterDan Kahan

A short thought that may be useful.
In our view, for a given task, there is often a small set of neural firings that will allow a person to accomplish that task quickly. If the task is not accomplished correctly, other neurons are recruited to accomplish the task. Certain tasks always require lots of neurons to be done correctly.
We can call the set of few neuron tasks System 1 and the set of many neuron tasks System 2. Things get complicated because a neuron that is in System 1 for one task is often in System 2 for thousands of other tasks and, for a particular task, whether a particular neuron belongs to System 1 or System 2 (or some subsystem) will vary over time. Some neurons, such as those on the skin that detect heat, don't really belong to either system but contribute to both.

July 20, 2013 | Unregistered CommenterEric Fairfield

I don't think there are distinct systems 'System 1' and 'System 2' either. I think 'System 2' is really the application of a lot of intuitive 'System 1' steps chained together in succession. Each individual step is intuitively 'obvious' to us, but selected from a relatively small set of logical building blocks, as well as a big database of memorised observations and prior conclusions. But there are many more combinations possible when they are put together.

I'm not sure I agree that 'System 2' is always more reliable than 'System 1', either. While 'System 2' is far more powerful in its capabilities, so that conclusions that are beyond the reach of 'System 1' that have had to be crudely approximated can be improved upon with the use of a more general method, at the same time each step in the chain is fallible, and the more steps there are in the chain of reasoning, the less likely it is that you will have got to the end without a fatal error. If the probability of correctness in each step is 90%, you can chain together only half a dozen steps before error is more likely than not. Even with 99% accuracy it takes only about 70 steps. When arguments are based on the conclusions of previous arguments which are themselves dependent on earlier arguments still, the steps soon mount up. It requires the *most rigorous* care, of a sort that most people find alien and exhausting, to come to reliable conclusions in chains of more than a few dozen steps. That's why people hate mathematics classes at school.
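
A minimal sketch checking this arithmetic, using the hypothetical per-step accuracies above:

```python
import math

def steps_until_unreliable(p_step):
    """Smallest chain length n with p_step**n < 0.5, i.e. the point at
    which at least one error in the chain is more likely than not."""
    return math.ceil(math.log(0.5) / math.log(p_step))

for p in (0.90, 0.99):
    n = steps_until_unreliable(p)
    print(f"per-step accuracy {p:.0%}: unreliable after {n} steps "
          f"(whole-chain accuracy {p**n:.4f})")
# per-step accuracy 90%: unreliable after 7 steps (whole-chain accuracy 0.4783)
# per-step accuracy 99%: unreliable after 69 steps (whole-chain accuracy 0.4998)
```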

The shorter chains with the most intuitive steps, founded on re-use of the visual or verbal modules of the brain rather than rules learnt formally in education, are often the most reliable. While formal rules of logic should in theory be better, in practice they are too often misremembered or misapplied. On the other hand, geometric and verbal intuitions are a lot harder to correct when they do go wrong, as it is easier to think you misremembered than to disbelieve your lying eyes. Paradoxes in mathematics and physics illustrate how we struggle to let 'System 2' override 'System 1', even after extensive education.

When 'System 1' is directly applicable, I'd say it was usually more reliable, but it only works for certain sorts of problems. With a lot of training and practice, mathematicians can make long chains of 'System 2' reasoning reliable too, but at the cost of severely limiting the topics it can reason about and the methods it can use.

July 20, 2013 | Unregistered CommenterNiV

@NiV
You are offering good insights, from our perspective. A few things for you to think about.
1. The length of the chain is not proportional to its accuracy. The amount that the chain is used is a better measure of its accuracy.
2. It is not clear what problems System 1 can or can't solve. For instance, after reading Kahneman and others I tried to do mathematics in my head without going through the standard calculation steps. I found out that I can do it accurately and well. I have no idea what I am actually doing. I just know that whatever I am doing is very fast and accurate. The process feels like System 1.
3. The idea of logical building blocks doesn't really exist in a brain. A better way to think of it is most likely as building blocks that are statistically true.
4. The big database of memorised observations and conclusions does not seem to exist at the cellular level. What exists is an interesting processing system.
Hope these insights stimulate discussion and thought.
(I am trying to be useful but have to be a little cryptic because this is an open forum and we have not published this stuff and, more often, because the actual answer is really long and requires a lot of background.)

July 20, 2013 | Unregistered CommenterEric Fairfield

Eric,

I'm not sure what you mean by point 1. I agree the length of a chain is not proportional to its accuracy. The simple analysis I suggested here would say it was proportional to the logarithm of its accuracy. I don't know what you mean by "the amount the chain was used", or why it would be related to accuracy.

2. It depends what sort of mathematics. Some people can do mental arithmetic very quickly, and apply the steps smoothly without having to think about what comes next. But that's rather different from doing proper mathematics on a problem you are not familiar with.

3. I'm not sure why you can't have both. And there are logical steps that are only statistically true - think of a Bayesian Belief Network.

4. I'm not sure why you would think the big database should exist at a "cellular level", assuming you mean something like one cell per fact. Associative memory isn't a difficult problem. Kohonen networks prove the principle, although I doubt real brains are anything like so simple.
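
For anyone who hasn't met them, here is a bare-bones 1-D Kohonen (self-organising) map--a toy sketch of the principle, nothing like a real brain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten units learn to tile the unit interval from random scalar inputs.
n_units, n_steps = 10, 2000
weights = rng.random(n_units)                     # random initial codebook

for t in range(n_steps):
    x = rng.random()                              # training input
    winner = int(np.argmin(np.abs(weights - x)))  # best-matching unit
    lr = 0.5 * (1 - t / n_steps)                  # decaying learning rate
    for j in range(n_units):
        if abs(j - winner) <= 1:                  # winner + immediate neighbours
            weights[j] += lr * (x - weights[j])

print(np.sort(weights))                           # units spread across [0, 1]
```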

July 20, 2013 | Unregistered CommenterNiV

For NiV (if others think this is off topic, let us know and we can take the discussion elsewhere. Also, if things aren't clear, ask for clarification.)

1. For neurons and synapses, logarithm of the length does not obviously apply. Also, since each step in the 'chain' is getting statistically weighted inputs from hundreds of neurons and giving statistically weighted outputs of varied strength to hundreds of other neurons, it might be more accurate to talk of 'sets of interrelated chains' and not get an image of 'a chain.' The amount the chain was used is related to the size of each link, which varies over time and usage.
2. I sort of agree with your answer about 2. I have not probed how far the System 1 effect goes. For me, I have taught myself to do 'proper mathematics' in very strange spaces in hundreds of dimensions and my intuition is very fast and almost always right. I have no way to know what each of my neurons is up to when I am doing this kind of mathematics.
3. You can have both. Brain function, as we understand it, maps to Bayesian Belief Networks. A problem in explaining it is that the posterior odds for a given action can be a scalar, and the priors can be listed as a million- to billion-component vector, but the likelihood function is a time-dependent, 'billion-dimensional', tensorlike thing that has mathematically well-behaved properties but is hard to talk about (see the toy sketch at the end of this comment).
4. Thanks for reminding me of Kohonen networks. They are a useful starting point for talking to certain audiences and I had forgotten about them. You are right though. Brain networks are dramatically more complex than Kohonen networks. If they weren't we would not survive. We would be immobile snack food for other organisms.
Thanks for the feedback.
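
A toy numerical version of the basic update mentioned in point 3--three hypotheses standing in for the 'million- to billion-component vector', and a single likelihood column standing in for the tensorlike thing; all numbers are made up:

```python
import numpy as np

prior = np.array([0.5, 0.3, 0.2])        # prior over three hypotheses
likelihood = np.array([0.1, 0.6, 0.9])   # P(evidence | each hypothesis)

posterior = prior * likelihood           # Bayes: prior times likelihood...
posterior /= posterior.sum()             # ...renormalised
print(posterior)                         # approx. [0.122 0.439 0.439]
```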

July 20, 2013 | Unregistered CommenterEric Fairfield

Eric,

1. Ah! The single steps I was talking about weren't at the level of single neurones, but a much higher level.

2. That sounds fascinating. You have an intuitive grasp of hundred-dimensional geometry? Or do you just mean concepts expressible with lists of a hundred or so numbers?

July 20, 2013 | Unregistered CommenterNiV

NiV,
I thought your single steps might be bigger than mine. Many people's 'single steps' are millions of neurons. I felt that the only way for me to understand these bigger steps was to start with single neurons and work my way up to bigger and bigger collections, carefully.
As to hundred dimensional spaces, yes, at least for a lot of them. Not hundred numbers but hundred dimensional spaces with odd shapes and odd distance measures. But then to understand some of the problems that I care about I have to think in these spaces. I have practiced a bit. ;-)

July 20, 2013 | Unregistered CommenterEric Fairfield

Cultural cognition throwdown.

I am now the lone Westerner, I think, in a group of 4,500 biotechnologists. They are in their 20s and 30s. I am trying to impart knowledge and not create polarization. Any suggestions for how to do this would be appreciated.

July 21, 2013 | Unregistered CommenterEric Fairfield

@Eric:

Do you find that the sort of cultural style or outlook associated with being from the Far West--I assume the others are either east or west *coast*--still figures in interactions among those who share the professional outlooks (habits of mind, but elements of cultural style, too, no doubt) distinctive of those in the biotech field? If so, what sorts of differences in perception or the like does it give rise to?

I myself believe strongly that professional habits of mind tend to crowd out the "cultural cognition" effects within the close confines of the domain in which the relevant professionals operate.

I'm sure this will cause @Larry to come at me (unless he is growing tired of the pursuit).

July 21, 2013 | Unregistered Commenterdmk38

I myself believe strongly that professional habits of mind tend to crowd out the "cultural cognition" effects within the close confines of the domain in which the relevant professionals operate.

I'm sure this will cause @Larry to come at me (unless he is growing tired of the pursuit).

No, I can see that the gap isn't closing, Dan.

But, speaking of "professional habits of mind" within operative domains, wouldn't your belief be an interesting one to actually test? An alternate belief/hypothesis: cultural cognition effects may be less noticeable among professionals than among the general population, but will be an increasingly significant factor as "the domain in which the relevant professionals operate" approaches the domains in which cultural cognition itself is more relevant (e.g., political/policy areas, ethical/moral values, etc.). I would leave it to professionals in this domain to come up with a suitable experimental design.

July 21, 2013 | Unregistered CommenterLarry

@Dan and Larry,
Interesting question.
I have worked in Oregon, Pennsylvania, Long Island, Boston, San Francisco, Knoxville, TN, Los Alamos, and a few other places. It seemed that the places I worked in had a local culture, but that the culture was driven by the personalities of the people who founded the place, not the location of the place in the country. The founders, as far as I remember at the moment, were non-local.
Biotech may be an outlier field for this question because it is such a new field. If you have been in it since the '80s, you are a founder. Many of the current biotech hotbeds are less than 15 years old. Also, biotech and biochem departments were not a small change to the chemistry and biology departments that preceded them but were created de novo and were a huge change, as neuroscience departments are now.

July 21, 2013 | Unregistered CommenterEric Fairfield

In reality there is a continuum between the different functions of the brain. The dichotomy is highly artificial, and has usually served only to let authors manipulatively suggest the alleged 'superiority' of 'System 2' processes (except in the much, much fairer Kahneman case). Interestingly, what is referred to as System 2 (essentially the working memory system) is highly limited in its ability to handle complexity. As problems become larger, System 2 actually calls on System 1 (essentially, the long-term memory system) to store and extract more information. System 2 can only readily handle problems which contain purely 'logical' properties, but is responsible for inspecting the entire range of problems. If System 2 encounters a problem which contains associative properties (as well as some logical property), it will need to call on System 1, which will engage in heuristic methods in search of hypothetical solutions. This associated information will then be fed back into System 2 to be evaluated for logical consistency. In a small problem, hypothetical thinking can be restricted to System 2, but larger problems will involve System 1 (LTM) processing--either through declarative recall or associative thinking. Both systems handle 'abstraction,' in the sense that they aid in deriving solutions which are not explicit.

November 13, 2013 | Unregistered CommenterTJ Henkle
