re: "I honestly don’t know. My understanding is that it is not considered psychologically healthy to suppress things, to skip over stages of grief, to pretend that things are fine when they are not fine."
A lot of psychologically healthy behavior looks strange or neurotic when pointed out in isolation like this but it's usually hyperfixat…
re: "I honestly don’t know. My understanding is that it is not considered psychologically healthy to suppress things, to skip over stages of grief, to pretend that things are fine when they are not fine."
A lot of psychologically healthy behavior looks strange or neurotic when pointed out in isolation like this, but it's usually hyperfixation and un-distract-able rumination that define pathological mood disorders. I would typically describe the ability to stop noticing that something is bothering you (even when it reasonably should) as a tool in one's mental toolbox that, say, someone with clinical depression might be lacking.
re: Ezra Newman speculation; I've always similarly wondered how plausible it is that public discussion of AI alignment would constitute a body of material that an AI, potentially in the midst of taking off, would come across and model in a way that might influence it to change course in that direction; not in the use-an-AI-to-solve-AI-alignment sense but in the oh-they're-talking-about-me sense.
re: "I honestly don’t know. My understanding is that it is not considered psychologically healthy to suppress things, to skip over stages of grief, to pretend that things are fine when they are not fine."
A lot of psychologically healthy behavior looks strange or neurotic when pointed out in isolation like this but it's usually hyperfixation and un-distract-able rumination that define pathological mood disorders. I would typically describe having the ability to stop noticing something is bothering you (even when it reasonably should) is a tool in one's mental toolbox that, say, someone with clinical depression might be lacking.
re: Ezra Newman speculation; I've always similarly wondered how possible it is that public discussion of AI alignment would constitute a body of material that an AI, potentially in the midst of taking off, would come across and model in a way that might influence it to consider changing course in that direction; not in the use-an-AI-to-solve-AI-alignment sense but in the oh-they're-talking-about-me sense.