Even before the risks associated with ASI, and perhaps even before AGI, I assume we are already experiencing amplified risks from weak AI being employed by motivated human bad actors. Even before AI can act alone, human actors (not just those who are coerced, as you suggest, but those motivated by other factors) could be a powerful amplifier for AI risks, and vice versa. It is concerning to consider the capabilities of roughly 3 billion monthly human users acting in concert with Facebook's services, for example. That concern only compounds as the capabilities of the software portion of such a system inch toward ASI. I wonder how you would update your threat matrix to include cyber-physical or cybernetic systems designed to engage the full expanse of humanity acting in coordination with AI.