Starting a few months ago, Terraform fails roughly 10% of the time while, apparently, pushing state to the backend (an S3 bucket). I have to clean up the cruft left behind, run it again, and it passes. It had been working fine for a couple of years before this started. The provider version hasn't changed. The environment hasn't changed. Any thoughts on what might be causing or exacerbating this?
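For context, the backend is the standard S3 backend. The block below is only a minimal sketch of what mine looks like; the bucket and key names are placeholders, not my real values:

terraform {
  backend "s3" {
    bucket  = "example-tfstate-bucket"           # placeholder bucket name
    key     = "workflow-api/production.tfstate"  # placeholder state key
    region  = "us-east-1"
    encrypt = true
  }
}

A typical failing run ends like this: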
module.task-definition.aws_ecs_task_definition.task-definition-default: Destroying... [id=workflow-api-production]
module.task-definition.aws_ecs_task_definition.task-definition-default: Destruction complete after 0s
module.task-definition.aws_ecs_task_definition.task-definition-default: Creating...
module.task-definition.aws_ecs_task_definition.task-definition-default: Creation complete after 0s [id=workflow-api-production]
module.load-balancer.module.service.aws_ecs_service.default: Modifying... [id=arn:aws:ecs:us-east-1:326764833890:service/internal-webserver-ssl/production-workflow-api]
module.load-balancer.module.service.aws_ecs_service.default: Modifications complete after 1s [id=arn:aws:ecs:us-east-1:326764833890:service/internal-webserver-ssl/production-workflow-api]
╷
│ Error: Failed to save state
│
│ Error saving state: failed to upload state: operation error S3: PutObject, failed to rewind transport stream for retry, request stream is not seekable
╵
╷
│ Error: Failed to persist state to backend
│
│ The error shown above has prevented Terraform from writing the updated state to the configured backend. To allow for recovery, the state has been written to the file "errored.tfstate" in the
│ current working directory.
│
│ Running "terraform apply" again at this point will create a forked state, making it harder to recover.
│
│ To retry writing this state, use the following command:
│ terraform state push errored.tfstate
╵
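For completeness, this is roughly the recovery dance I go through each time (a sketch of the commands, not my literal shell history):

terraform state push errored.tfstate   # push the locally saved state back to the S3 backend
terraform plan                         # sanity-check that state and infrastructure still agree
terraform apply                        # the re-run then completes without the PutObject error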