Confidently Access Simple and Secure CloudDB (Mongo) in AWS

In this post, I show how I connected to a CloudDB (AWS's MongoDB-compatible offering) in a private subnet using AWS Systems Manager (SSM) session tunneling.

The reason I started on this quest was some work I was doing for a long-time client. The backend for their application is CloudDB (AWS's MongoDB-compatible offering in the cloud) placed in a private subnet.

Noodling the problem

That means the only way to access it is from inside the private subnet that the database is on. For the application, that works perfectly. And if you jump onto a machine in that subnet, you can run queries against the database.

That said, sometimes it is nice to use a GUI tool like MongoDB Compass. Moreover, I’ve updated all my hosts to use SSM sessions instead of SSH.

Using SSM ties access to normal AWS authentication and authorization. Because of that, you must have a role or identity that has access in order to connect.
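As a concrete illustration, an IAM statement granting tunneled sessions to a single instance might look like the sketch below. The ARNs are placeholders, not values from this setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": [
        "arn:aws:ec2:us-west-2:123456789012:instance/i-0123456789abcdef0",
        "arn:aws:ssm:us-west-2::document/AWS-StartPortForwardingSessionToRemoteHost"
      ]
    }
  ]
}
```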

CloudTrail and SSM log all of the activity for these sessions, which adds the benefit of making it easy to see who did what and when.
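If you want to see that audit trail from the CLI, commands along these lines work. They are real AWS CLI commands, but they need live credentials, so this sketch degrades to a notice when none are configured:

```shell
# Pull the audit trail for tunneled sessions; skip when no credentials
if aws sts get-caller-identity >/dev/null 2>&1; then
  # who started sessions, and when (CloudTrail keeps 90 days by default)
  aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=StartSession \
    --max-results 5
  # SSM's own view of completed sessions
  aws ssm describe-sessions --state History --max-results 5
  AUDIT_RESULT="queried"
else
  echo "no AWS credentials available; skipping audit queries"
  AUDIT_RESULT="skipped"
fi
```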

SSM as SSH tunnel

The first thing I had to figure out was how to fool these tools into thinking that an SSM tunnel was an SSH tunnel. SSM, as it turns out, has built-in tunneling, and the tools are unaware of what provides the connection.

To begin, log in using AWS SSO (ensure it’s set up in your ~/.aws/config file first). You could use any authentication method like an Okta federation to get your credentials.
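For reference, a minimal SSO profile in ~/.aws/config looks something like this (all values here are hypothetical):

```ini
# ~/.aws/config -- hypothetical values throughout
[profile db-account]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-west-2
sso_account_id = 123456789012
sso_role_name  = PowerUserAccess
region         = us-west-2
```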

export AWS_PROFILE=<profile name of the account your DB is in>
export AWS_DEFAULT_REGION=<AWS region your DB is in>

# Must be logged in for this to work
aws sso login

# Double check we are in the right account
aws sts get-caller-identity

If that all works, you'll see the account ID and role you're logged in with.

Digging the tunnel

Using those credentials, you need to set up the tunnel. From the console you can pull the DNS name for the CloudDB cluster endpoint, along with the certificate file that MongoDB uses to connect.

Next, use the CLI to pull the endpoint from AWS itself:

ENDPOINT=$( aws docdb describe-db-clusters --query "DBClusters[].Endpoint" --output text )

With only one cluster, the process was straightforward. For multiple clusters you may need some additional filters in the above command.
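One way to narrow it down is to ask for a specific cluster by identifier. The identifier below is hypothetical, and the fallback value only exists so the sketch runs without credentials:

```shell
# Select one cluster's endpoint when several exist
# ("my-docdb-cluster" is a hypothetical identifier -- use your own)
if aws sts get-caller-identity >/dev/null 2>&1; then
  ENDPOINT=$(aws docdb describe-db-clusters \
    --db-cluster-identifier my-docdb-cluster \
    --query "DBClusters[0].Endpoint" \
    --output text)
else
  # placeholder so the sketch still demonstrates the shape of the value
  ENDPOINT="my-docdb-cluster.cluster-example.us-west-2.docdb.amazonaws.com"
fi
echo "Endpoint: ${ENDPOINT}"
```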

Connecting via an instance

The next thing you'll need is the instance ID of the host you plan on tunneling through. I tagged the instance with Jump=true to make it easy to find. You could use other tags or attributes depending on your use case.

# find the instance ID based on Tag of Jump=true (also must be running)
INSTANCE_ID=$(aws ec2 describe-instances \
  --filter "Name=tag:Jump,Values=true" \
  --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
  --output text)

echo "Instance ID: ${INSTANCE_ID}"

The INSTANCE_ID variable now will contain the instance ID (assuming the instance is tagged and running).

The Command

Take the DNS name for the DB that you found in the console (or grab the IP address if you prefer) and run the SSM tunnel command:

# create the port forwarding tunnel to the DocDB instance
aws ssm start-session \
  --target ${INSTANCE_ID} \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters host="<db host name from console>",portNumber="27017",localPortNumber="27017"

Once that is done, you can use your favorite tool to connect using localhost instead of the DNS name of the DB.
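Before wiring up a GUI, you can sanity-check that the local end of the tunnel is listening. This sketch assumes nc is available; the state strings are just for illustration:

```shell
# probe the forwarded port; "open" only while the SSM session is up
if nc -z -w 2 localhost 27017 2>/dev/null; then
  TUNNEL_STATE="open"
else
  TUNNEL_STATE="closed"
fi
echo "local tunnel port is ${TUNNEL_STATE}"
```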

… Except…

Database tools use the host name to verify the TLS certificate on the connection. The connection will fail if you connect with localhost instead of <db host name from console>.

DB host hack

On most systems you can add an entry to your /etc/hosts file to make localhost answer to any name you want. This works for now: I leave the tunnel running, then update the /etc/hosts file to add the <db host name from console>:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost <db host name from console>
255.255.255.255 broadcasthost
::1             localhost
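Appending that entry can be scripted. In this sketch, HOSTS_FILE and DB_HOST are placeholders; on a real system HOSTS_FILE is /etc/hosts and the append needs sudo (e.g. via sudo tee -a):

```shell
# Sketch: add a loopback alias for the DB host (placeholder values)
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.example}"
DB_HOST="${DB_HOST:-mycluster.cluster-example.us-west-2.docdb.amazonaws.com}"

touch "${HOSTS_FILE}"
# idempotent: append only if the alias is not already present
if ! grep -q "${DB_HOST}" "${HOSTS_FILE}"; then
  echo "127.0.0.1 ${DB_HOST}" >> "${HOSTS_FILE}"
fi
```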

With the hosts entry in place and the tunnel running, you are ready to connect. Run the local app with the “real” DB name, and you are connected.
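For a client like mongosh or Compass, the connection string would look something like the sketch below. Every value is a placeholder, and global-bundle.pem is assumed to be the CA certificate file downloaded earlier:

```shell
# Build the connection URI; all credentials/host values are placeholders
DB_HOST="mycluster.cluster-example.us-west-2.docdb.amazonaws.com"
DB_USER="appuser"
DB_PASS="example-password"

URI="mongodb://${DB_USER}:${DB_PASS}@${DB_HOST}:27017/?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
echo "${URI}"
# then: mongosh "${URI}"   (or paste the URI into MongoDB Compass)
```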

I haven’t yet figured out how to revert this DNS hack elegantly. My startup script has a bit of code to add the hostname to the /etc/hosts file; cleanup of the hostname is an exercise for the future.
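One possible shape for that future cleanup step, sketched with placeholders (the real target is /etc/hosts with sudo; sed -i.bak works on both GNU and BSD/macOS sed):

```shell
# Sketch: remove the loopback alias again (placeholder values)
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.cleanup}"
DB_HOST="${DB_HOST:-mycluster.cluster-example.us-west-2.docdb.amazonaws.com}"

# seed an example file standing in for /etc/hosts
printf '127.0.0.1 %s\n::1 localhost\n' "${DB_HOST}" > "${HOSTS_FILE}"
# drop only the lines that mention the DB host, leaving the rest intact
sed -i.bak "/${DB_HOST}/d" "${HOSTS_FILE}"
```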


CloudTrail and SSM together provide a robust activity log, so there is accountability for who connected and when. The /etc/hosts entry is a temporary DNS hack, but it makes the connection through localhost seamless.

The result is quick, effective, and secure CloudDB access without traditional SSH.

Hi, I’m Rob Weaver