
What's the best practice for storing database credentials in an auto-scaling environment, while still being able to update database information?

For instance, I have an auto-scaling site on AWS. At present there is only one database, but at some point new database instances will be added (either for data sharding or as read-only replicas).

I'm trying to figure out the best way to store database credentials so that they're also easy to update. Although I've mentioned AWS here, ideally I'd like something platform-agnostic if possible.

It's obviously wrong to store database credentials in source control, so my initial thoughts are along the following lines (though even this seems wrong):

Use the same username and password on all database servers (there could still be different users depending on whether it's a write or read-only DB, but every user would exist on the servers that need it), then somehow store these securely on a webserver image. However, I've also read posts arguing that environment variables shouldn't be used for this either, so I'm wondering how to do it.
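Purely to illustrate that idea, here is a minimal sketch of an app server loading credentials from a permission-restricted file baked onto (or pushed to) the image, rather than from environment variables or source control. The path and JSON layout are hypothetical:

```python
import json
import os
import stat

CREDS_PATH = "/etc/myapp/db-credentials.json"  # hypothetical path, never committed to source control

def load_db_credentials(path=CREDS_PATH):
    """Read DB credentials from a root-owned, tightly permissioned file."""
    st = os.stat(path)
    # Refuse to run if the file is readable by group or other users.
    if st.st_mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError(f"{path} must only be readable by the app user")
    with open(path) as f:
        return json.load(f)  # e.g. {"username": "...", "password": "..."}
```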

A separate config file that merely lists the addresses and names of the database servers could potentially live in source control, though I think even this should be kept secret. The issue then becomes how to get an updated list of database servers out to all the webservers easily.
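To show the shape of such a list (the filename, keys and hostnames below are made up), the topology file would only name the endpoints, with the credentials kept elsewhere:

```python
import json
import random

# Hypothetical /etc/myapp/db-topology.json, pushed to every webserver
# whenever the database fleet changes:
#
#   {"write":    ["db-master.internal:5432"],
#    "readonly": ["db-ro-1.internal:5432", "db-ro-2.internal:5432"]}

def pick_endpoints(path="/etc/myapp/db-topology.json"):
    with open(path) as f:
        topology = json.load(f)
    writer = topology["write"][0]                 # single writer for now
    reader = random.choice(topology["readonly"])  # spread reads across replicas
    return writer, reader
```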

The problem I'm facing is how to securely update x webservers with the details of y databases when both x and y can change. In theory y would be more stable than x, given that x is auto-scaling, but when y does change it has a greater impact.

This will most likely be on Linux (Ubuntu) machines. I haven't specified a scripting language, as it would be good if there are solutions that can be used more widely.

Thanks

    1 Answer


    Not really a best-practice answer, but your use case sounds a lot like the common problem addressed by configuration management. Configuration management applications like Chef or Puppet provide you with the means to efficiently set up and configure software on clustered systems.

    Usually you set up a master system that installs and configures all your other systems, such as application servers and database instances. The master updates configurations every n minutes based on rules like:

    1. When the number of application servers increases, deploy a configuration file containing all the database credentials to the new servers and start them up.
    2. When the number of database instances decreases, update all application servers' configuration files and restart them one by one.
    3. When the number of database instances increases, get credentials for the new instances from the cloud service, update all application servers' configuration files and restart them one by one.
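    As a rough sketch of what such a rule boils down to (this is not Chef or Puppet syntax; the hostnames, paths and service name are invented), the master essentially re-renders one file and rolls it out one server at a time:

```python
import json
import subprocess

APP_SERVERS = ["app-1.internal", "app-2.internal"]   # would normally come from the cloud API
TOPOLOGY_FILE = "/etc/myapp/db-topology.json"        # hypothetical path on each app server

def push_topology(databases, app_servers=APP_SERVERS):
    """Render the new database list and roll it out to every app server, one by one."""
    payload = json.dumps(databases, indent=2).encode()
    for host in app_servers:
        # Write the new topology file on the app server...
        subprocess.run(["ssh", host, f"sudo tee {TOPOLOGY_FILE} > /dev/null"],
                       input=payload, check=True)
        # ...then restart it before moving on, so only one server is down at a time.
        subprocess.run(["ssh", host, "sudo systemctl restart myapp"], check=True)

push_topology({"write": ["db-master.internal:5432"],
               "readonly": ["db-ro-1.internal:5432"]})
```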

    You might also consider having the configuration management application provision your databases. In that case you could go as far as generating a random password whenever a new database is required, creating the database users automatically, and publishing the credentials to all application servers. Strictly speaking, no human would ever have to handle these database passwords again!
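    A minimal sketch of that last idea, assuming a small helper inside the provisioning run (the naming scheme and the way the SQL ultimately gets executed are hypothetical):

```python
import secrets

def provision_db_user(db_name):
    """Create a user/password pair for a new database instance.
    No human ever sees the password; it goes straight into the rendered config."""
    username = f"app_{db_name}"            # hypothetical naming scheme
    password = secrets.token_urlsafe(32)   # cryptographically strong random password
    # token_urlsafe output contains no quotes, so inlining it is safe for this sketch.
    create_sql = f"CREATE USER {username} WITH PASSWORD '{password}';"
    # The provisioning run would execute create_sql against the new instance,
    # then publish the credentials to every application server's config file.
    return {"username": username, "password": password, "sql": create_sql}
```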

    Finally, let's state some security considerations:

    • Obviously the master system is a valuable target and should be secured accordingly, e.g. placed in an MZ (militarized zone) with strongly regulated network access.
    • The traffic between the master and the other systems is usually encrypted to prevent leakage of credentials and other sensitive information.
