I have a 9.2GB file that I want to transfer to my AWS t2.small instance for backup purposes. When I start scp, it copies the file at around 3.4MB per second, which works out to an expected transfer time of about 45 minutes.
Some time down the track the instance always locks up. I can no longer type anything in terminal windows, websites stall (it's a web server), and I can't connect to it. Rebooting the instance solves the problem.
I investigated EBS limits: I have two 200GB gp2 disks attached in RAID 10. From this documentation I cannot see that I exceed the IOPS or throughput limits for the disks. I also checked bandwidth, but cannot find any information on t2 instances there. Finally I looked at CPU credits, but presumably running out of credits should not make the instance stall completely?
This is a one-off transfer, so I'm looking to get an idea of how much I have to slow the transfer down to make it happen safely. At the same time, I'd like to get an idea of the limits that matter for managing this web server.
If you want to find out what the problem is, install some monitoring, or open several connections to the system and run utilities like `top`, `vmstat`, `iostat`, `free`, etc. (use watch(1) if needed) to get a view of what is happening to the system's resources. Gather data and then apply the scientific method - it's the only way to be sure.

If you just want to transfer the file, try using `split` to chunk the file up and transfer each chunk separately. You can then use `cat` to reassemble the chunks into the whole file on the instance, as sketched below.
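A rough sketch of the chunked approach (file names, chunk size and paths are placeholders, not anything specific to your setup):

```sh
# Split the 9.2GB file into 500MB pieces locally.
split -b 500M backup.tar.gz backup.part.

# Copy the pieces one at a time; stop on the first failure so you can resume.
for part in backup.part.*; do
    scp "$part" ec2-user@your-instance:/backup/ || break
done

# On the instance, reassemble the pieces (split's suffixes sort correctly)
# and verify against a checksum taken of the original before the transfer.
cat /backup/backup.part.* > /backup/backup.tar.gz
sha256sum /backup/backup.tar.gz
```

Adding a pause between chunks (e.g. a `sleep 30` inside the loop) also gives the instance time to flush dirty pages and recover CPU credits between bursts.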
One possibility is the file system cache. When copying large amounts of data, the file system cache can use up all available memory (a t2.small only has 2GB), resulting in swapping, which might cause the system to become unresponsive. I'm not sure whether there is a way to bypass the file system cache with scp, though.
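If it does turn out to be memory pressure from cached or dirty pages, two mitigations you could try. These aren't part of the original suggestion, just a sketch; the scp bandwidth limit and the sysctl values are illustrative and assume you have root on the instance:

```sh
# 1) Throttle scp itself; -l takes Kbit/s, so 8192 is roughly 1 MB/s.
scp -l 8192 backup.tar.gz ec2-user@your-instance:/backup/

# 2) On the instance, cap how many dirty pages can accumulate before the
#    kernel starts writing them out to EBS (values here are examples only).
sudo sysctl -w vm.dirty_background_bytes=67108864   # start flushing at 64MB
sudo sysctl -w vm.dirty_bytes=268435456             # block writers at 256MB
```

Throttling the transfer gives the kernel time to write data out as it arrives, so the cache never grows to the point of forcing the box into swap.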