In America, there is a large stigma against mental illness. While I believe that people recognize that mental illness exists, there is a stark resistance to accepting it. This resistance stems not from others but from recognition of the problems within ourselves. We fear being called “crazy,” being unaware of our mental faculties, being broken. The culture around us paints a nasty picture of people suffering from mental illness, even though it is likely that at some point during our lives each and every one of us will suffer too.
Having a mental illness does not mean you are broken, or crazy, or wrong. Quite the contrary, it is part of the human experience, and just like seeking help for a physical malady or disease, we should seek help when we need it. Our mind is ours and ours alone, and we have to live with it every day, so it makes good sense to take care of it.
It’s hard. It’s not like going to the doctor, taking some medicine, and all of a sudden feeling better. Treating mental health takes work, work that is uncomfortable, work that makes us wish we didn’t have a problem. But it’s so, so worth being comfortable in our own skin.
One thing I think we should recognize in the grand scheme of shifting our attitudes towards mental health is that animals, from our cousins the great apes to our best animal friends, suffer from mental illnesses as well. And they do not have the luxury of seeking out professional help on their own. They typically suffer in silence, and that suffering is easy to miss: simply googling “dog” and “OCD” brings up a deluge of “cute” videos, but it isn’t until you watch a dog chase its shadow all day, or pace up and down, or bite and chew itself to the bone, that the suffering stands out. We draw a lot of inspiration from animals, and we model much of our understanding of life’s complexities off them. So, while we seek help for ourselves, we should seek help for them, and realize that we are all in the same boat.
Origins of Language – “The Hardest Question in Science”
Today in class, I began my lecture series on the origins of language, with three questions in mind:
Where did language come from?
What are the requirements for language?
How do we know so much about our own language without being explicitly taught?
And it occurred to me that for all three of these questions, we can only give partial answers to our students, as we don’t have enough evidence to fully support one theory over another.
Addressing the first question is nearly impossible without a time machine. While that’s a bit hyperbolic, there is some truth to the notion that we have very little to work with in discovering which early human first muttered some word or utterance: no one wrote it down, and I can’t dig in the ground and find the first language ever spoken. We’ve constructed some methods to get fairly close to an answer, from looking at the fossil record for larynx locations in early humans to using the Comparative Method in linguistics, but none really gets us a concrete answer to the question of where language started.
The second question is also rather complex, since defining what “language” is differs from linguist to linguist. Even if we all agree on one definition, the question then arises of what conditions we need to start a language, and even more basic, what components of the mind and brain comprise a language. Some effort has been made to list the other cognitive faculties that a language might require, for instance deixis (the ability to point to, say, “me” and know what that means in context) and theory of mind (the idea that I know I can think and that others can think as well), but it is not clear how long this list of features needs to be to constitute the necessary and sufficient conditions for a language.
Finally, the last question is most easily addressed with a fair amount of confidence, as it speaks to a problem Chomsky coined “Plato’s Problem,” or the “Poverty of the Stimulus.” The name comes from the story of Socrates drawing some basic geometric properties out of an uneducated boy, and it asks the basic question, “how do we know so much information without being explicitly taught?” For Socrates, how can an uneducated person deduce geometric relationships, and for the linguist, how do we know when a sentence is acceptable or grammatical when no one sat us down and went through every sentence to show us which were good and which were bad? How do we know, in English, how to insert swear words perfectly into other words without much effort? For Socrates, the answer was reincarnation and knowledge being passed on from your ancestors; the modern solution isn’t far off. The modern solution is that we have innate knowledge at birth: we have the capacity to absorb human sounds (or sights) and put meaning and structure to them. We have innate knowledge of the space around us, emotions, language, etc.
Now, some may discredit innate knowledge as reincarnation-style pseudoscience, but the evidence shows us that we are not tabula rasa at birth. We are instead equipped with a set of knowledge that we can use as we grow up. We can deduce what “near” means when placing a marble near a box and know that once the marble is inside the box it is no longer near, even though, arguably, no one taught us that. We can figure out how to walk, to talk, to throw objects across the room (performing rather complicated physics) with a relative knowledge of where they will land, all without explicit conditioning or learning. We can try all day to assume that everything we know is taught to us, but to pursue this endeavor would be to purposefully ignore all the amazing abilities humans have and often take for granted.
I recently had a discussion with some students on avoiding Type I and Type II errors in their research. However, it soon became clear that no one in the room knew what these errors were, and upon further digging, that most of the terms many scientists talk about, e.g., p-values, the central limit theorem, ANOVA, etc., are just that: talked about, not understood.
This isn’t a new problem, in that we all “fake it ’til we make it,” but I think we should start pushing a much more sensible narrative for new grad and undergrad students: it’s okay not to know everything, and if you don’t know, find out. I would say ask, but if you’re a fourth- or fifth-year grad student asking your advisor what a p-value is, there may be some social consequences; still, this shouldn’t scare you away from finding out what these things mean. Swallow your pride for a second and learn about it, because it’s much worse to find yourself in a situation, say an interview, where they ask you a basic question from your field and you struggle to piece it together.
Blaming doesn’t solve much, but I will say that another contributor to this trend of claiming to know certain terms without quite understanding them at their core is the advent of software that does everything for you. Now, I’m not saying we should go back to calculating ANOVAs by hand or looking up z tables in old textbooks, but when training your students, teach the theory and the root of the concept; don’t just have them punch it into SPSS or use an old R script to get the job done.
For peace of mind: type 1 errors are false positives, meaning that you THINK some effect is there, but it isn’t, and type 2 errors are false negatives, meaning that you THINK some effect is NOT there, but it actually is.
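To make the false-positive idea concrete, here is a minimal simulation, using only the Python standard library and made-up numbers: we repeatedly compare two samples drawn from the very same distribution, so any time we call the difference “significant,” that is by construction a Type I error. The 2.0 cutoff is a rough approximation of the two-tailed .05 critical value for this sample size.

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    """Simplified Welch-style t statistic for two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

false_positives = 0
n_sims = 2000
for _ in range(n_sims):
    # Both samples come from the SAME distribution: no real effect exists.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    # |t| > ~2.0 roughly corresponds to p < .05 here
    if abs(t_stat(a, b)) > 2.0:
        false_positives += 1

# With alpha at .05, the observed rate should land near 0.05
print(f"observed Type I error rate: {false_positives / n_sims:.3f}")
```

The point of running it yourself is that the 5% false-positive rate stops being a slogan and becomes something you have watched happen two thousand times.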
Anyway, I guess my point is that yes, sometimes you gotta fake it until you make it, but it’s okay to spend some time each day checking in with yourself to see if you’re really confident about the terms you sling around all day. And, if you find yourself caught red-handed in an interview or meeting, simply admitting that you don’t know and are willing to learn is gonna be better for you and everyone around you.
Anchoring Bias – A problem in the age of information
In my classroom, I always warn students about cognitive biases that could affect them as students and scientists, and one in particular, anchoring bias, is an incredibly prevalent problem in the modern social landscape. Anchoring bias, or simply anchoring, is when someone “anchors” themselves to the first piece of information they hear, making a decision off of that one (and typically only) source.
For some context, imagine a student browsing Facebook or some other social media of choice and seeing an article about how we eat at least eight spiders per year (not true, by the way). Even though, at our fingertips, we have the power to dispel or confirm almost any statement we hear, we don’t tend to do this. Instead, a more common reaction is to simply like, upvote, and share in a blind state of misinformation.
Why do we not bother to check out every claim we see? Is there just too much information out there, or are we too implicitly trusting of our social media sites? Is it just easier to look at one thing and, if it conforms to my beliefs, run with it (confirmation bias)? Or is it that participating in social media makes us feel good, or at least feel something, from disgust (“eww, we eat that many spiders?!”) to elation after looking at a cute animal (“OMG look at this baby hippo! It’s #adorbs #petmaterial”)? Regardless of the truth of the information we are spreading, participating in social media makes us feel connected, so perhaps this is partially why we don’t really care what’s being read or shared; we just want to share stories, articles, and memes with others.
It is unclear exactly why we don’t bother to check out each claim, but I imagine the answer touches on many of the questions posited here. However, there is one way to help combat this problem, and that is to remind students to be skeptical, to not rely so much on the first answer they encounter, and to question the “scientific fun facts” they find on Facebook; while these may be fun, they may also be entirely untrue, making them just funny sentences. Reminding others that the first thing you see on the internet may not be true seems trivial, but sometimes we really need these reminders.
For many, speaking in public induces panic and sweat. Those are never fun things to have while standing in front of a group of people, so whether you’re a student looking to give a presentation, a new employee presenting a report for your boss, or merely ordering at a restaurant, follow these three concepts and you’ll crush it. And then crush it again.
Okay, so obviously having confidence in yourself is crucial to a great presentation. But what’s vitally important, and what many miss, is confidence in the material you’re presenting. Say you’re talking about some set of data that (a) you think is bullshit or (b) you don’t really know. If you don’t take the time to study your topic, or at least feign interest in it, it really shows, and it really translates to a bad time.
Know your material front and back without the usage of note cards or slides, and if you don’t like it, follow the old adage, “fake it ’til ya make it.” When you present yourself as an expert, even though by all definitions you may not be, people really play off of that confidence and respond accordingly.
What if you don’t feel very confident yourself? Like, “Ahh, I’m a big ol’ dummy,” or, “I am awful at this.” Already, you are putting yourself in a negative mindset, so you want to change this at the start. Begin by saying something to yourself like, “I know this material, and I’m gonna rock this presentation.” Even if you don’t truly believe it, saying it out loud will help shift your negative, losing mindset to a more positive one. If you go into it with a losing attitude, you’re gonna lose.
The best speeches are memorable not necessarily for their content, but for how the deliverer, well, delivered them. There are two ways to make yourself sound like a confident and expert speaker that many folks overlook.
First, it’s okay to take a pause. Natural silence in a conversation is not as scary as you may think it is, nor is it as long and awkward. It may even add dramatic effect to a point you’re delivering, or leave people a moment to ponder. The go-to solution for avoiding this pause is to insert linguistic noises like “ummmm,” and when you begin to fill these pauses with “uhh, umm, well, see, uhh,” you lose credibility and crowd interest with every single utterance. You sound less confident, organized, prepared, and enthusiastic about whatever you’re presenting.
Secondly, slow down. Speak slower than you think sounds natural; it will translate very well to a crowd of people, especially when that crowd begins to climb into the double and triple digits. Slowing down gives you more time to process your thoughts, allows for more natural pauses, and forces you to enunciate your words to more clearly ar-ti-cu-late what is going on.
The best way to practice? Record yourself. And I mean both audio AND video. You will probably cringe if you’ve never done this, as people generally don’t like the sound of their own voice. But fight through that! By recording yourself giving a practice talk, you can immediately begin to pinpoint what’s going wrong or right and naturally adjust accordingly. What’s especially important here, besides vocal delivery, is physical delivery: do you move around too much, use your hands too much, tap nervously, etc.? These behaviors distract from your material, but don’t beat yourself up too much. Take some time to practice and recognize that you won’t become a great speaker overnight just by recording, but begin to tease apart your behaviors and make adjustments accordingly.
This isn’t about being mean to yourself; it’s about looking at yourself and identifying both positive and negative traits in your speech. Go in judgment-free, or have a close friend help out, and you’ll see your skills improve wonderfully.
Okay, the last part of giving a good speech is to remember who you’re talking to. Tailor your word choices, prose, and presentation materials to your audience. Are you speaking to a ton of Ivory Tower folk, or to a general audience, or to your boss, your friends, family, etc.? Depending on who is out there, give your talk TO them, not just in front of them. Speak to your audience, play off their reactions, and understand them as well.
For example, if you happen to crack an awkward joke and no one laughs, move on. And don’t try to say it again later; you may get a pity laugh, but you know they probably don’t dig that particular sense of humor.
If your audience is really enjoying your talk, you’ll know it, and if they aren’t, you’ll know that, too. In the moment, it can be hard to tell why they may or may not like it, but try your best to take that pause, get sorted, and move the train back onto the track (well, if it got derailed).
What’s the best way to ascertain this knowledge? Practice. Practice giving talks to many different groups of folks and adjust your content accordingly. Over time, you’ll figure out what works for who and apply it in an apropos fashion. There is no blanket answer for any particular audience, as each one will be different, but this is a skill that comes with practice and patience, and once you master it, you’ll be wowing audiences for years to come.
Tl;dr – I gave a speech about speeches this year and was recorded, so take a listen if you’re interested.
Our lab at MSU started from a few boxes of equipment in an empty room, and has since grown into a burgeoning and cooperative lab environment running two EEG systems.
Amidst our growth, we have learned several new techniques for gathering EEG (electroencephalogram) data, processing it, and interpreting it. Recently, we have incorporated EOG (electrooculogram) into our lab.
When you build a lab from scratch in a relatively new field with no experts in your department, nothing is handed to you, so we have had to push ourselves to get everything up to field standard. Over time we have achieved a lot, but EOG was still not part of our workflow.
Googling something like “how to use EOG” or “what to do with EOG” doesn’t help much, especially when everyone uses different software or hardware, or works in a different field entirely, which changes their whole methodology and usage of the various technologies.
So, though Google and Wikipedia can tell you what something is, the nitty-gritty details of using something like EOG, say, checking compatibility with your other hardware, interpreting the data, etc., are left for you as a lab researcher to figure out. Sure, you can seek help from other, more knowledgeable experts, but at the end of the day, making it work for you is really up to you.
Some background: we use ANT Neuro for our hardware, which we pipe into the MATLAB plugins EEGLAB and ERPLAB to process the raw data into numbers, which we then analyze using R. In neurolinguistics, we can use EEG to look at event-related potentials (ERPs), which are indications of changes in neurophysiological activity in the brain with respect to some event. For example, say you see a series of Xs, like this–
X X X X X X X X
–displayed one at a time on a computer screen in front of you. You may become rather accustomed to seeing Xs, so something else may throw you off your game:
X X X X O X X X
Once this pattern is broken, your brain has to compute that the pattern of Xs you were so used to seeing is indeed now broken, which I guess can be upsetting for perhaps both you and the brain. This is a classic design known as an Oddball Paradigm, and it’s useful here to illustrate that we not only recognize this change behaviorally, but that the difference is also easily recognizable at the neurophysiological level.
At the O, we would mark about a second before and after to set a window of time in the EEG data to focus on, and we would take every window in which this O was presented to the participant and average them together. By doing this, we get a picture of what happens in the brain when this event occurs. What we would find is a change in electrical activity relative to a baseline condition, in this case, seeing Xs. These changes in electrical activity are evidence that the brain is doing some extra processing work. At the end of the day, we are interested in seeing where in the flow of processing information the brain is spending that extra effort.
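That epoch-and-average logic can be sketched in a few lines. To be clear, this is a toy simulation, not our actual ANT Neuro/EEGLAB pipeline: the sampling rate, event timing, and the shape of the simulated deflection are all made up for illustration.

```python
import numpy as np

# Toy ERP averaging: simulate one EEG channel at 500 Hz with ongoing
# noise plus a small deflection peaking ~300 ms after each "oddball".
rng = np.random.default_rng(0)
fs = 500                                  # samples per second
eeg = rng.normal(0, 5, 60 * fs)           # 60 s of noisy "EEG" (microvolts)

events = np.arange(2 * fs, 58 * fs, 2 * fs)   # one event every 2 seconds
t = np.arange(int(0.6 * fs))                  # 600 ms of post-event signal
bump = 5 * np.exp(-((t - 0.3 * fs) ** 2) / (2 * (0.05 * fs) ** 2))
for ev in events:
    eeg[ev:ev + len(t)] += bump               # add the event-locked response

# Epoch: cut a window from -1 s to +1 s around each event, then average.
pre = post = fs
epochs = np.stack([eeg[ev - pre:ev + post] for ev in events])
erp = epochs.mean(axis=0)   # averaging cancels random noise, keeps the ERP

peak_ms = erp[pre:].argmax() / fs * 1000
print(f"averaged {len(epochs)} epochs; peak ~{peak_ms:.0f} ms post-event")
```

In the real pipeline, EEGLAB/ERPLAB handle the epoching, baseline correction, and averaging across dozens of channels and participants; the point here is just that averaging time-locked windows makes a small, consistent response visible above the noise.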
For linguistic inquiry, we apply this kind of logic to sentences, where during some portion of sentence processing that has some kind of behavioral response (a slowdown in reading because there is an odd word or structure), we measure that part of the sentence using EEG.
In EEG work, there are many artifacts that can get in the way, such as moving around, blinking, greasy hair, etc., so we try our best to remedy most of these. For eye movements and blinks, EOG can be used to record electrophysiological data of these events; eventually, we take this data and subtract it from the rest of the EEG data, reducing overall noise.
Here’s how we went about including this extra data: first, we set up two sets of bipolar electrodes, one set above and below the eye, or VEOG (vertical electrooculogram), and one set to the sides of the eyes, or HEOG (horizontal electrooculogram). Each set has two electrodes that act as a kind of reference point for each other, so that when the eye moves from center, we can use both electrodes to calculate where the eye is looking, as well as the much noisier (with respect to EEG data) muscle movement of blinking. Here is a photo of one set of bipolar electrodes next to our amplifier and the electrode connections for 32 of the 64 available ports.
Once this data is recorded, we can use it to subtract blink and eye-movement artifacts that are read into the other electrodes on the cap.
The actual subtraction methodology we use comes from a 1983 paper by Gratton, Coles, and Donchin (Gratton, G., Coles, M. G., & Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55, 468–484. doi: 10.1016/0013-4694(83)90135-9).
Basically, it takes the EOG data and removes it from the other electrodes on the cap using a regression weight based on how far away each electrode is from the eyes. So, for electrodes rather close to the eyes, like FPz, where blinks and eye movements are much more visible, there is a greater subtraction, while electrodes at the back of the head, which rarely show blinks at all, get a much smaller regression weight.
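As a rough illustration of that regression idea (and only an illustration: the actual Gratton, Coles, and Donchin procedure estimates propagation factors from event-locked data and handles blinks and saccades separately, which this sketch does not), here is the core computation with invented numbers, where a frontal channel picks up much more of the EOG than an occipital one:

```python
import numpy as np

# Simplified regression-based ocular correction: estimate how strongly
# the EOG leaks into an EEG channel (propagation factor b), then
# subtract b * EOG from that channel. All numbers here are made up.
rng = np.random.default_rng(1)
n = 5000
veog = rng.normal(0, 20, n)        # vertical EOG: large blink-like activity

# Two fake EEG channels: frontal (FPz-like, heavy leakage) and
# occipital (Oz-like, barely any), each with its own "brain" signal.
fpz = rng.normal(0, 5, n) + 0.40 * veog
oz = rng.normal(0, 5, n) + 0.05 * veog

def correct(eeg, eog):
    """Regress the EOG out of one EEG channel; return cleaned data and b."""
    c = np.cov(eeg, eog)
    b = c[0, 1] / c[1, 1]          # OLS slope: cov(eeg, eog) / var(eog)
    return eeg - b * eog, b

fpz_clean, b_fp = correct(fpz, veog)
oz_clean, b_oz = correct(oz, veog)
print(f"estimated propagation: FPz ~{b_fp:.2f}, Oz ~{b_oz:.2f}")
```

The estimated factors recover the leakage we built in, about 0.40 up front and 0.05 at the back, which matches the intuition above: a big subtraction near the eyes, a small one at the back of the head.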
Okay, so we have the methodology, we have the data, now what? How do we use this in EEGLAB or ERPLAB? A few options: one, does this software natively have it? No. Should we write it ourselves? We can, but let’s not reinvent the wheel and see if someone else has contributed. And, indeed, they have. If you would like to use this correction in your EEGLAB, check out this link here. Here is some data I collected on myself today that’s been filtered and processed–
Note that the highlighted region shows a good example of this correction smoothing out a negative deflection in the frontal poles and other electrodes.
Granted, this isn’t the only way to go about using EOG data and accounting for eye artifacts; many others utilize various methodologies that work for them. Some use independent component analysis (ICA), others use more complex forms of regression and correction, and some use methods that don’t require EOG data at all. But at the end of the day, I’m glad we’ve included this kind of information in our workflow, because now our data is cleaner and our lab is happier.