Amazon recently announced SMTP support for the Amazon Simple Email Service (SES), which is very cool. Now you can configure your server to send email through it regardless of what platform your site is built on (my previous post was only relevant to PHP servers). There are three main things you need to do to configure your Postfix server to relay email through SES: verify a sender email address, create an IAM user for SMTP, and configure your server to use SES.
Verify a sender email address
- In the SES section of the AWS Management Console, click on “Verified Senders”:
- Then click on the “Verify a New Sender” button:
- Enter the Sender’s Email Address and click “Submit”:
- Then you’ll see the confirmation message:
- Go to that email account and click on the link Amazon will email to you to confirm the address.
Create IAM Credentials
- In the SES section of the AWS Management Console, click on “SMTP Settings”:
- Click on the button “Create My SMTP Credentials”:
- Choose a User Name and click “Create”:
- Save the SMTP Username and SMTP Password that are displayed. We’ll need them when we’re configuring the server.
Configure the server
Now for the fun part. Here I assume you’re running Postfix as the MTA on your server.
- Install stunnel:
apt-get install stunnel
- Add these lines to /etc/stunnel/stunnel.conf and make sure stunnel starts properly (you may have to edit /etc/default/stunnel so that it starts automatically on boot):
accept = 127.0.0.1:1125
client = yes
connect = email-smtp.us-east-1.amazonaws.com:465
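On Debian-family systems, the /etc/default/stunnel edit mentioned above is typically a one-line toggle. This is an assumption about your distribution; the file may be named /etc/default/stunnel4 depending on the release, so check your system:

```
# /etc/default/stunnel4 (or /etc/default/stunnel, depending on release)
ENABLED=1
```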
- Add this line to /etc/postfix/sender_dependent_relayhost:
<your verified sender address> [127.0.0.1]:1125
- Generate the hashfile with this command:
postmap /etc/postfix/sender_dependent_relayhost
- Add this line to /etc/postfix/password:
127.0.0.1:1125 <your SMTP Username>:<your SMTP Password>
- Fix the permissions on /etc/postfix/password:
chown root:root /etc/postfix/password
chmod 600 /etc/postfix/password
- Generate the hashfile with this command:
postmap /etc/postfix/password
- Add these lines to /etc/postfix/main.cf:
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_dependent_relayhost
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/password
- Load the new configuration with this command:
postfix reload
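Putting the Postfix-side commands together, here is a sketch of the whole sequence. This is not an official procedure; the paths match the ones used above, and the script defaults to dry-run mode so you can review the commands before running them for real as root:

```shell
#!/bin/sh
# Recap of the map-building and reload steps above (a sketch).
# Defaults to dry-run; export DRY_RUN=0 to actually execute as root.
: "${DRY_RUN:=1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run postmap /etc/postfix/sender_dependent_relayhost  # build the relayhost hash map
run chown root:root /etc/postfix/password            # credentials readable by root only
run chmod 600 /etc/postfix/password
run postmap /etc/postfix/password                    # build the SASL password hash map
run postfix reload                                   # pick up the main.cf changes
```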
After setting it up, look closely at the mail logs on your server to verify that messages are being delivered properly. As I found through testing, with certain misconfigurations your email will not be delivered and will not remain in the queue on the server. The mail logs are the only place that will indicate that delivery is failing.
If you need to add other senders in the future, edit /etc/postfix/sender_dependent_relayhost accordingly, then run:
postmap /etc/postfix/sender_dependent_relayhost
The reason for using sender_dependent_relayhost is that you want to control which email gets sent through SES. If you try to send all email from the server through SES, some of it will probably end up going into a black hole. When I was testing this before using sender_dependent_relayhost, I didn’t have my root@ email address verified, so emails ended up bouncing back, then bouncing into oblivion, never to be seen again (because the server would try to relay email to root@ through SES too.)
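For reference, a sender_dependent_relayhost map that relays only the verified sender through the stunnel endpoint might look like this (the address is a placeholder; any sender not listed here uses your default transport):

```
# /etc/postfix/sender_dependent_relayhost
verified-sender@example.com [127.0.0.1]:1125
```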
If you’re using PHP and want to check that incoming connections came over HTTPS, you are probably checking the $_SERVER['HTTPS'] variable. The problem is, if your servers are behind a load balancer which handles SSL encryption for you, this method of checking won’t work. Fortunately, there are other headers added by the load balancer that you can use to detect SSL. They are:
- $headers["X-Forwarded-For"] == 188.8.131.52 (the client’s IP, because $_SERVER['REMOTE_ADDR'] is going to give you the load balancer’s IP address)
- $headers["X-Forwarded-Port"] == 443
- $headers["X-Forwarded-Proto"] == https
These headers should work with all load balancers, including Amazon’s ELB on EC2.
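As a sketch of the same check outside PHP: behind a proxy, these headers reach a CGI/FastCGI application as environment variables. The HTTP_X_FORWARDED_* names below follow the standard CGI header-mapping convention, and the values are hard-coded here purely for illustration:

```shell
#!/bin/sh
# Simulated request environment, as set by a load balancer that terminates SSL.
HTTP_X_FORWARDED_PROTO="https"
HTTP_X_FORWARDED_PORT="443"

# The HTTPS check: trust X-Forwarded-Proto instead of the connection itself.
if [ "$HTTP_X_FORWARDED_PROTO" = "https" ] && [ "$HTTP_X_FORWARDED_PORT" = "443" ]; then
  echo "secure"
else
  echo "not secure"
fi
```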
Note: if you want to setup SES in a way that scales much better and functions even with non-PHP sites, please read this more recent HowTo: How to configure your Postfix server to relay email through Amazon Simple Email Service (SES)
Here’s how you can start using Amazon’s new SES (Simple Email Service) without having to actually implement it in the PHP of your website:
- Extract the files and create a new one named “aws-credentials” with your key data in it; for example:
AWSAccessKeyId=<your access key id>
AWSSecretKey=<your secret key>
- Verify an email address to use with SES
./ses-verify-email-address.pl -k ./aws-credentials -v firstname.lastname@example.org
- Check the email account for the address you’re verifying and click on the provided link.
echo "This is only a test." | ./ses-send-email.pl -k ./aws-credentials -s "test subject for email" -f email@example.com firstname.lastname@example.org
(Note – Until you receive production access to Amazon SES, you can only send to addresses you have verified. You can request production access here.)
- Edit the sendmail_path config in your php.ini as follows:
sendmail_path = /path/to/ses-send-email.pl -k /path/to/aws-credentials -f email@example.com -r
- Restart/reload Apache and that’s it!
(Additional notes – The “From” address you set in your php.ini file will override any mail headers you set in PHP. Sending will fail if you try to set the “From” header to an unverified address, or if you set the “Reply-To” header at all in PHP.)
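Under the hood, PHP’s mail() pipes the fully formed message to the sendmail_path command on stdin. Here is a safe way to see what your script will receive, using cat as a stand-in for ses-send-email.pl (the stand-in is an assumption for illustration only):

```shell
#!/bin/sh
# cat stands in for: /path/to/ses-send-email.pl -k /path/to/aws-credentials -f ... -r
SENDMAIL="cat"

# mail() builds headers plus body and writes them to the command's stdin:
printf 'Subject: test\n\nThis is what ses-send-email.pl reads on stdin.\n' | $SENDMAIL
```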
Here is a PHP script you can use to update your website from your git repository. You can pass two parameters to it:
- “r” – revision you want checked out from git (r=head works also)
- “l” – number of log entries you want to view
For example, if I were running it on this site, here is what each URL would do:
You need to make sure that the directory structure is owned by the HTTP daemon user (so that the files can be updated.) It is best to run it initially from the command line as that user on the server to make sure everything is working properly.
One word of caution: you should restrict access to who can run this script (maybe with HTTP auth over HTTPS), because the script isn’t perfect and you don’t want to let just anyone make changes to your site. There are also certain security risks that are increased when your website files are owned by the webserver user. It is recommended that you only use this script in a protected environment.
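The PHP script itself is not reproduced here, but the git operations behind the two parameters look roughly like this. This is a sketch, not the author’s code; the function names and the use of origin/HEAD for “head” are assumptions:

```shell
#!/bin/sh
# deploy: fetch and check out a revision ("head" means the remote's default branch tip).
# showlog: print the last N log entries, one per line.
deploy() {
  dir=$1; rev=${2:-head}
  [ "$rev" = "head" ] && rev="origin/HEAD"   # assumption: follow the remote default branch
  git -C "$dir" fetch --quiet origin
  git -C "$dir" checkout --quiet "$rev"
}

showlog() {
  dir=$1; n=${2:-5}
  git -C "$dir" log --oneline -n "$n"
}
```

Remember that, as noted above, the checkout only succeeds if the webserver user owns the working tree.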
Servers die. People make mistakes. Solar flares, um, flare. There are many things that can cause you to lose your data. Fortunately, there is a pretty easy way to protect yourself from data loss if you use MySQL.
My preferred solution is to store a copy on EC2 through replication. One big reason I like to replicate to EC2 is that it becomes a pretty easy warm-failover site. All of your database data will be there; to switch over, you’ll just need to start up webservers or other systems required by your architecture and make a DNS change. If your datacenter became a smoking hole in the ground, you could be back up and running on EC2 in 15 minutes or less with proper planning.
No matter where your MySQL master server is hosted, you can replicate to an EC2 instance over the internet. Latency generally isn’t an issue when compared to the lag that may be introduced by the replication process itself. I typically see a maximum of 5–10 seconds of replication lag during general use. That lag is due to the replication process being single-threaded (only one modification is made to the database at a time.)
Here are a few things to keep in mind when setting up replication:
- Use a separate EBS volume partition for your mysql data directory
- There is good replication documentation for MySQL
- Use SSL
- Set expire_logs_days to an acceptable value on both the master and slave servers. The value of this setting will vary depending on the volume of data you send to the slave each day. Don’t make it so small that recovery with the binlogs will be difficult or impossible.
- Store your binlogs on the same partition as the mysql data directory. This simplifies the snapshot and recovery process.
Here’s a sample EBS snapshot perl script for MySQL that can be modified and used to create snapshots of the mysql data on the slave server:
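The perl script itself is not included here; as a sketch of the steps such a script needs to perform (this is not the author’s script — ec2-create-snapshot comes from Amazon’s old EC2 API tools, and the volume id is a placeholder):

```shell
#!/bin/sh
# Consistent EBS snapshot of a mysql slave's data partition (a sketch).
# Defaults to dry-run; export DRY_RUN=0 on the slave to execute for real.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run mysql -e "STOP SLAVE"            # pause replication so no writes land mid-snapshot
run mysql -e "FLUSH TABLES"          # flush table caches to disk (assumes no other writers)
run sync                             # flush filesystem buffers to the EBS volume
run ec2-create-snapshot vol-00000000 # placeholder volume id for the mysql data partition
run mysql -e "START SLAVE"           # resume replication; the snapshot is point-in-time
```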
Since this is a mysql slave server, you can create volume snapshots whenever you want without any impact on your master database. By default, AWS imposes a 500 volume snapshot limit. If you have that many snapshots, you’ll have to delete some before you will be able to create more.
With the periodic snapshots and binlogs, you can recover to any point in time. I’ve been able to recover from a “bad” query that unintentionally modified all rows in a table as well as accidentally dropped tables.
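Point-in-time recovery from a snapshot plus binlogs looks roughly like this. It is a sketch only; the binlog file name, the stop time, and the dry-run wrapper are placeholders you would replace with values from your own logs:

```shell
#!/bin/sh
# Replay binlogs from the restored snapshot up to just before the bad query.
# Defaults to dry-run; export DRY_RUN=0 to execute against the restored server.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# 1. Restore the data directory from the most recent EBS snapshot (not shown).
# 2. Replay everything after the snapshot, stopping before the bad statement:
run sh -c "mysqlbinlog --stop-datetime='2011-06-01 12:00:00' mysql-bin.000042 | mysql"
```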
Can you replicate from multiple database servers to a single server? Yes, but a rule of replication is that a slave can only have one master. To make it possible for one server to be a slave to multiple master servers you need to run multiple mysql daemons. Each daemon runs with its own configuration and separate data directory. I’ve used this method to run 20 mysql slaves on a single host and I’m sure you could run many more than that.
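One common way to run several mysql daemons on one host is mysqld_multi, with a my.cnf section per instance. The author doesn’t say which mechanism he used, and the ports, paths, and server ids below are placeholders:

```
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port      = 3307
socket    = /var/run/mysqld/mysqld1.sock
datadir   = /var/lib/mysql1
server-id = 101

[mysqld2]
port      = 3308
socket    = /var/run/mysqld/mysqld2.sock
datadir   = /var/lib/mysql2
server-id = 102
```

Each instance then points at a different master with its own CHANGE MASTER TO settings.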
Millcreek Systems is available to help you setup and maintain MySQL replication for you. Please contact us if you’d like to discuss our services further.