Mongo DB replication - AWS

Soldato
Joined
18 May 2010
Posts
22,294
Location
London
Hoping someone has some thoughts on this little issue we have at work.

We have a MongoDB database that stores state information for our live app.

When we promote staging to live, we need to sync this database from live to staging so that the most up-to-date state is in place before we do the promotion.

Currently it runs as a Python script and the sync can take up to 8 hours to complete.
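I haven't dug into the script itself yet, but if it's doing the naive read-everything-and-reinsert thing, it probably looks roughly like this (hostnames, database name and batch size below are guesses, not our actual setup):

    # Guess at what a full-copy sync script looks like; hosts, db name
    # and batch size are placeholders, not the real config.
    from pymongo import MongoClient

    live = MongoClient("mongodb://live-host:27017")["appstate"]
    staging = MongoClient("mongodb://staging-host:27017")["appstate"]

    for name in live.list_collection_names():
        staging[name].drop()                    # start the staging copy clean
        batch = []
        for doc in live[name].find(batch_size=1000):
            batch.append(doc)
            if len(batch) == 1000:
                staging[name].insert_many(batch, ordered=False)
                batch = []
        if batch:
            staging[name].insert_many(batch, ordered=False)

Reading and re-inserting every document over the network like that is exactly the kind of thing that ends up taking hours on a big database.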

This is in an AWS EC2 env.

Has anyone got any thoughts how we can leverage the AWS cloud to make this faster?

I get it might be difficult to suggest based on this limited info but any ideas are welcome.

Randomly I was watching this:


Not sure it's a good fit for what we are trying to achieve.

I've suggested an EFS share between the two dbs. But the concern is performance and cost.

Taking a snapshot of the EC2 instance or of the DB might not work either due to the size.

But I imagine these ideas haven't been fully explored yet.

I couldn't tell you the size of the DB, but I imagine it's large, hence why it takes 8 hours to sync.
 
Man of Honour
Joined
19 Oct 2002
Posts
29,508
Location
Surrey
I'm not an expert in this area so I probably can't help much, but it interests me so I'll ask a few questions which might help others answer it.

Is your python script literally reading every record from the live database and then performing an insert into the staging database?

How often do you need this replication to run? Is it nightly or only occasionally?

Does the live database have to remain online and in use while the replication takes place?
 
Soldato
Joined
3 Jun 2005
Posts
3,046
Location
The South
I've used Mongo (very) briefly, but if it's two instances, can you not use copydatabase/clonedatabase?
Similarly, I'm sure you can copy the actual DB files from one instance to another and it'll restore the data; never tried it personally though.

Edit - Quick Google brings up this StackOverflow with some info - https://stackoverflow.com/questions...orm-one-time-db-sync-to-another-db-in-mongodb

And it looks like copydatabase/clonedatabase have been deprecated in favour of mongodump and mongorestore.
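If the script ends up going that route, the dump/restore pair can be driven from Python with a couple of subprocess calls; a rough sketch (hosts, db name and dump path are placeholders):

    # Rough dump-and-restore sync driven from Python; host names, db name
    # and the dump directory are placeholders.
    import subprocess

    subprocess.run(
        ["mongodump", "--host", "live-host", "--db", "appstate",
         "--gzip", "--out", "/tmp/appstate-dump"],
        check=True)

    subprocess.run(
        ["mongorestore", "--host", "staging-host", "--db", "appstate",
         "--gzip", "--drop", "/tmp/appstate-dump/appstate"],
        check=True)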
 
Soldato
OP
Joined
18 May 2010
Posts
22,294
Location
London
I'm not an expert in this area so I probably can't help much, but it interests me so I'll ask a few questions which might help others answer it.

1. Is your python script literally reading every record from the live database and then performing an insert into the staging database?

2. How often do you need this replication to run? Is it nightly or only occasionally?

3. Does the live database have to remain online and in use while the replication takes place?

---

1. Not sure to be honest.

2. The replication runs when we deploy staging, before promoting it to live. The staging DB needs to be in sync with prod before we do the switchover, otherwise the state of the application gets lost.

3. Yes, until we promote staging to live.

My idea was to use an EFS share as the filesystem for both the prod and staging Mongo instances (EFS is basically NFS in the cloud).

The idea is that if they are both sharing the same filesystem already there is nothing to sync.

The issue is we would have to use the most performant EFS type and this could end up very expensive.
 
Soldato
OP
Joined
18 May 2010
Posts
22,294
Location
London
I've used Mongo (very) briefly, but if it's two instances, can you not use copydatabase/clonedatabase?
Similarly, I'm sure you can copy the actual DB files from one instance to another and it'll restore the data; never tried it personally though.

Edit - Quick Google brings up this StackOverflow with some info - https://stackoverflow.com/questions...orm-one-time-db-sync-to-another-db-in-mongodb

And it looks like copydatabase/clonedatabase have been deprecated in favour of mongodump and mongorestore.

I've used mongodump and mongorestore before.

It's basically what the Python script is doing, but it currently takes 8 hours. We want to make it faster.
 
Associate
Joined
10 Nov 2013
Posts
1,804
I don't use mongo myself, but a couple of general suggestions/questions:
  • What about having two instances with master/slave or master/master replication, so both instances are constantly in sync without this big-bang approach? (There's a rough sketch after this list.)
  • You say it's the live application data - is it genuinely all 'live' state, or does the data also include historic data and/or report data that is not changing? Is it possible to split this up into separate tasks?
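On the first point, the replication setup in Mongo is a one-off step rather than something you run per deployment. A minimal sketch of initiating a replica set from Python, assuming both mongod processes are started with the same --replSet name (hostnames and the set name here are made up):

    # Minimal replica-set initiation sketch; hostnames and the set name are
    # placeholders, and both mongod processes must be started with
    # --replSet rs-appstate before this will work.
    from pymongo import MongoClient

    client = MongoClient("mongodb://live-host:27017", directConnection=True)
    client.admin.command("replSetInitiate", {
        "_id": "rs-appstate",
        "members": [
            {"_id": 0, "host": "live-host:27017", "priority": 2},
            {"_id": 1, "host": "staging-host:27017", "priority": 1},
        ],
    })

Once the set is initiated the secondary keeps itself in sync continuously, so there's no bulk copy at promotion time.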
 
Soldato
Joined
3 Jun 2005
Posts
3,046
Location
The South
Have you thrown any monitoring on to your instances to find out where the bottleneck is?
And what's your current EBS/storage type?

From reading around, it seems mongodump/mongorestore are IOPS and memory bound, so that could be one place to start.
This post on StackOverflow might help with improving mongodump if you are using that - https://stackoverflow.com/questions/28017653/how-to-speed-up-mongodump-dump-not-finishing.
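A common trick from those threads is to cut out the intermediate dump files entirely and pipe an archive straight from mongodump into mongorestore, with the collection-level parallelism turned up; roughly like this (hosts and the parallelism values are guesses to tune, and the flags are worth double-checking against your tools version):

    # Pipe mongodump straight into mongorestore so nothing is written to
    # local disk; hosts and parallelism values are placeholders to tune.
    import subprocess

    dump = subprocess.Popen(
        ["mongodump", "--host", "live-host", "--db", "appstate",
         "--archive", "--numParallelCollections", "4"],
        stdout=subprocess.PIPE)

    subprocess.run(
        ["mongorestore", "--host", "staging-host", "--drop",
         "--archive", "--numParallelCollections", "4",
         "--numInsertionWorkersPerCollection", "4"],
        stdin=dump.stdout, check=True)

    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mongodump failed")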

Saying that, Mongo does support replication via replica sets (https://docs.mongodb.com/manual/replication/), which might be a better solution all round, and they also mention doing FS snapshots for moving data (https://docs.mongodb.com/manual/tutorial/backup-with-filesystem-snapshots/).
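On the filesystem-snapshot side, in EC2 terms that would mean snapshotting the live data EBS volume and restoring it as a fresh volume for the staging instance. A rough boto3 sketch, with placeholder volume ID, region and AZ (and the live mongod would need journaling on that volume, or an fsync lock, for the snapshot to be consistent):

    # Rough EBS snapshot-and-restore flow; volume ID, region and AZ are
    # placeholders. The snapshot is only consistent if mongod journals to
    # the same volume or is fsync-locked while it is taken.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="appstate copy for staging promote")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                            AvailabilityZone="eu-west-2a",
                            VolumeType="gp3")
    # Then attach vol["VolumeId"] to the staging instance and mount it as
    # the staging mongod's dbPath.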

My idea was to use an EFS share as the filesystem for both the prod and staging Mongo instances (EFS is basically NFS in the cloud).

What's the advantage of doing this? Surely that negates the need for two instances, as essentially staging and production would be "one"?
 
Soldato
OP
Joined
18 May 2010
Posts
22,294
Location
London
Have you thrown any monitoring on to your instances to find out where the bottleneck is?
And what's your current EBS/storage type?

From reading around, it seems mongodump/mongorestore are IOPS and memory bound, so that could be one place to start.
This post on StackOverflow might help with improving mongodump if you are using that - https://stackoverflow.com/questions/28017653/how-to-speed-up-mongodump-dump-not-finishing.

Saying that, Mongo does support replication via replica sets (https://docs.mongodb.com/manual/replication/), which might be a better solution all round, and they also mention doing FS snapshots for moving data (https://docs.mongodb.com/manual/tutorial/backup-with-filesystem-snapshots/).

My idea was to use an EFS share as the filesystem for both the prod and staging Mongo instances (EFS is basically NFS in the cloud).

What's the advantage of doing this? Surely that negates the need for two instances, as essentially staging and production would be "one"?

Need to investigate what the bottleneck is.

I believe we are using gp3 as the volume type for the MongoDB, but I could be wrong.

I will have a look at whether changing it to io2 and using provisioned IOPS might help.
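For what it's worth, changing the volume type is an online operation, so it should be cheap to test; roughly (volume ID, region and the IOPS figure are placeholders):

    # Online switch of an EBS volume to io2 with provisioned IOPS; the
    # volume ID, region and IOPS value are placeholders to size properly.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0",
                      VolumeType="io2",
                      Iops=16000)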

My idea with EFS is that it's basically a filesystem mounted on both instances, so whenever we bring up a new staging instance it will automatically have the most up-to-date production files accessible and there won't be any need to do a sync.

The disadvantage is that EFS is at least three times as expensive as an EBS volume.

And even with the most performant type of EFS, it may not be as fast as an io2 EBS volume.
 