We were familiar with Microsoft Azure; however, its services, such as its managed Redis, did not live up to our expectations. Finally we decided to give Amazon Web Services a try and agreed on two things. First, everything ought to run as a service. Second, we would start with very small instances to avoid spending too much money in the beginning phase, and switch to more powerful ones later.
After successfully configuring the Magento services (RDS, Redis, S3 and Elasticsearch), we created a tiny node instance with a 20 GB SSD drive, 2 CPUs and 2 GB of RAM. The nodes run PHP-FPM and Nginx only; Magento's media files are stored in an S3 bucket. The client's e-commerce site, built on Magento 2, has been developed by Macopedia for some time now. After importing the database from the test environment, we did the first artifact deployment. Frankly, I was pleasantly surprised when I hit the admin URL: after cache warmup and a successful login, the admin panel loaded pretty fast. Everything seemed to be fine.
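As a rough sketch, pointing a Magento 2 install at these AWS services can be done from the Magento CLI. The endpoints below are placeholders, and the exact flags and search-engine value depend on your Magento version:

```shell
# Hypothetical endpoints -- substitute your own ElastiCache and
# Elasticsearch Service addresses.

# Use Redis for the cache backend and for sessions:
bin/magento setup:config:set \
  --cache-backend=redis \
  --cache-backend-redis-server=mystore.abc123.0001.euw1.cache.amazonaws.com \
  --cache-backend-redis-db=0 \
  --session-save=redis \
  --session-save-redis-host=mystore.abc123.0001.euw1.cache.amazonaws.com \
  --session-save-redis-db=2

# Point catalog search at the managed Elasticsearch domain:
bin/magento config:set catalog/search/elasticsearch_server_hostname \
  search-mystore.eu-west-1.es.amazonaws.com
```

Serving media from S3 is a separate concern; Magento does not do it out of the box, so we assume a module or a sync job handles the bucket side.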
Even though the backend worked faster than we had expected, basic product and category operations such as insert or edit were pretty slow. Worse, we found out that when more than two people were working in the backend at the same time, saving products or adding product images, we got deadlocks on database tables. It was about time to tune some things up.
First, the RDS instance. We upgraded it to a more powerful one with 16 GB of RAM and 4 CPUs. I found it interesting that when you upgrade an RDS instance, database settings like innodb_buffer_pool_size are adjusted dynamically as well. You don't need to worry about crucial settings while changing the instance; RDS will take care of it for you. It's worth mentioning that after the upgrade the write IOPS count increased, although we neither changed the SSD type nor found any more deadlock information in the logs.
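For reference, an instance-class upgrade like this can also be done from the AWS CLI; the identifier and class below are assumptions matching the 4 CPU / 16 GB spec:

```shell
# Hypothetical instance identifier -- adjust to your own database.
aws rds modify-db-instance \
  --db-instance-identifier mystore-db \
  --db-instance-class db.m4.xlarge \
  --apply-immediately

# The automatic retuning works because the default RDS parameter group
# defines memory-based settings as formulas, e.g. innodb_buffer_pool_size
# is {DBInstanceClassMemory*3/4}, so it scales with the new instance class.
```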
After dealing with RDS, we focused on the application node. We noticed that CPU utilization during peak hours was more than 80%. We switched from the micro instance to a medium one (4 CPUs, 4 GB of RAM, 20 GB SSD) and added another application node for more redundancy. As a result, multiple backend users are able to work at the same time, including heavy product or category operations, with no delays or deadlocks. The frontend can handle up to 70 users with no further issues whatsoever; at least, that's how many we have counted so far.
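The peak-hour CPU figure can be pulled from CloudWatch without leaving the terminal; the instance ID and time window here are placeholders:

```shell
# Hypothetical instance ID and dates -- substitute your own.
# Hourly maximum CPU utilization for one application node:
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2017-05-01T00:00:00Z \
  --end-time 2017-05-02T00:00:00Z \
  --period 3600 \
  --statistics Maximum
```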
The AWS console is awesome! It is user-friendly, easy to explore and very intuitive. Even if you're not familiar with its nooks and crannies, after a couple of hours you'll get used to it and feel like you have been working with it for years.
Setting up a second application node based on the first one honestly could not be easier. A few clicks and the job is done. What is more, such operations, whose duration probably depends on instance size, don't take that much time.
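The same clone-a-node flow exists in the CLI: snapshot the first node as an image, then launch a copy from it. All identifiers here are assumptions:

```shell
# Hypothetical IDs -- substitute the values of your existing node.
# 1. Create an AMI from the running application node:
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "app-node-template" \
  --no-reboot

# 2. Launch the second node from the resulting image:
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.medium \
  --count 1 \
  --key-name mystore-key \
  --security-group-ids sg-0123456789abcdef0
```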
Changing advanced database settings is simple as well. The settings grid is very clear and the options are intuitive. A live operation status bar shows you what is happening while the cluster applies the settings and reboots.
Free charts! Basic charts are available for almost every instance and service. Of course, they cannot be as advanced and precise as the ones from New Relic or Blackfire. However, you can always pair Amazon with paid analytics tools.
I'm aware that there are still some things to be done on our side, like multi-node deployment, automatic node scaling or enabling Amazon CloudFront. Still, I am positive that choosing AWS was the right way to go. A scalable, flexible and reliable infrastructure is now a must-have for every e-commerce client in the world. If you add well-prepared monitoring to such an infrastructure, you can be sure that your shop's availability is as high as possible.