As for tracking changed records: SQL Server has the 'rowversion' column type, which draws from a database-wide counter that increments on every insert or update, so it is very easy to query which rows in a table contain new or updated data: simply store the last-seen rowversion value and query for any rows with a higher one.
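A minimal sketch of the SQL Server mechanism (the table, column, and variable names here are illustrative, not from your schema):

```sql
-- SQL Server: the rowversion column is maintained automatically
-- on every insert and update; you never write to it yourself.
CREATE TABLE orders (
    id      INT IDENTITY PRIMARY KEY,
    payload NVARCHAR(MAX),
    rv      ROWVERSION
);

-- Pull everything changed since the last synchronisation:
SELECT id, payload, rv
FROM orders
WHERE rv > @last_seen_rv
ORDER BY rv;
```

The client then stores the highest `rv` it received as the new `@last_seen_rv`.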
I understand this functionality isn't present in MySQL. Perhaps the most straightforward way of tracking changes, then, is to apply triggers to each base table on the server that you are attempting to synchronise, which insert the primary key of each inserted or updated record into a secondary audit table, with an auto-increment primary key on the audit table itself.
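In MySQL that arrangement might look like the following (a sketch only; `base_table`, `base_audit`, and the column names are placeholders for your own schema):

```sql
-- Audit table: auto-increment key gives a global ordering of changes.
CREATE TABLE base_audit (
    audit_id BIGINT AUTO_INCREMENT PRIMARY KEY,
    base_id  INT NOT NULL
);

-- One trigger per event of interest on each synchronised base table:
CREATE TRIGGER base_after_insert AFTER INSERT ON base_table
FOR EACH ROW INSERT INTO base_audit (base_id) VALUES (NEW.id);

CREATE TRIGGER base_after_update AFTER UPDATE ON base_table
FOR EACH ROW INSERT INTO base_audit (base_id) VALUES (NEW.id);
```

Note that a row updated several times between synchronisations will appear several times in the audit table, so the client should de-duplicate on `base_id` when applying changes.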
Clients then query the audit table for the rows inserted since the last auto-increment key they saw, and follow the links in the audit table to retrieve the relevant rows from the base table. The audit table can be periodically pruned once you are satisfied that all clients are up to date (book-keeping which may entail yet another table on the server, consisting of one row per client, and maintained by each client, to record how far it has got).
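Continuing the sketch above, the client pull and the pruning book-keeping could take this shape (`:last_seen` and `client_progress` are assumed names, not anything MySQL provides):

```sql
-- Client pull: everything audited since the last key this client saw.
SELECT a.audit_id, b.*
FROM base_audit a
JOIN base_table b ON b.id = a.base_id
WHERE a.audit_id > :last_seen
ORDER BY a.audit_id;

-- Server-side book-keeping: one row per client, updated by each
-- client after it has applied a batch.
CREATE TABLE client_progress (
    client_id     INT PRIMARY KEY,
    last_audit_id BIGINT NOT NULL
);

-- Prune audit entries that every client has already seen:
DELETE FROM base_audit
WHERE audit_id <= (SELECT MIN(last_audit_id) FROM client_progress);
```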
A similar approach can be used in the opposite direction from client to server. An audit table is prepared locally by each client, and these rows are periodically pushed to the server, then the local audit table cleared.
There is a caveat with this approach, however. Access to the audit table on the server (whether for reading or writing) must be serialised, and the lock on the table must be taken before the stage at which any auto-increment key is reserved. The danger otherwise is of two concurrent transactions completing out of order (relative to the sequence of the auto-increment keys they have reserved) on the audit table, with a synchronisation occurring between the two completions. The synchronisation will retrieve the row with the higher key and store that value as the last seen; the earlier row will then become visible, but will never be synchronised, because its key falls below the last-seen value already stored before the row became visible.
This is probably not a common occurrence in practice, because of the close timing required, but the risk is present unless synchronisation is prevented from running whenever a transaction has reserved a key value on the audit table but has not yet taken the locks necessary to insert that value. You therefore have to take the lock manually, before executing any step that will reserve a new auto-increment key.
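One possible shape for that discipline, using MySQL's `LOCK TABLES` (a sketch under the same assumed table names; with InnoDB you might instead serialise through `SELECT ... FOR UPDATE` on a single-row mutex table inside the transaction):

```sql
-- Writer: take the lock *before* anything reserves an auto-increment
-- key, so audit rows become visible in key order.
LOCK TABLES base_table WRITE, base_audit WRITE;
INSERT INTO base_table (payload) VALUES ('...');  -- the trigger fires here
UNLOCK TABLES;

-- Synchroniser: takes the same lock, so it can never run between an
-- out-of-order pair of completions.
LOCK TABLES base_audit READ, base_table READ;
-- ... run the client pull query here ...
UNLOCK TABLES;
```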
The performance implications of this serialisation may be completely acceptable on a server that is only ever modestly loaded (at least in terms of activity on the particular tables being audited for synchronisation), but this sort of locking could be a concern on a heavily loaded table with a lot of concurrent access.
Using timestamps to differentiate between new and existing records will pose much the same problems, except that there will cease to be any guarantee that the timestamps are unique. So in principle (even once the locking concerns are dealt with) you will always have to query any rows whose timestamp equals the last-seen value, as well as those that are later, and potentially retrieve rows that had already been applied to the client at the time of the last synchronisation.
And depending on how many rows are in the base table, you may want to index the timestamps for faster identification of the relevant rows, but then suffer a performance penalty on the maintenance of the index. The separate audit table, by contrast, stores links only to the relevant new rows in the base table, and already stores them in the desired ascending order of time.
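The timestamp variant might look like this (again a sketch; `updated_at` and `:last_seen_ts` are assumed names, and the column would need to be maintained by the application or by `ON UPDATE CURRENT_TIMESTAMP`):

```sql
-- Note the >=, since timestamps are not unique: rows equal to the
-- last-seen timestamp may already have been applied, so the client
-- must de-duplicate them on its side.
SELECT id, payload, updated_at
FROM base_table
WHERE updated_at >= :last_seen_ts
ORDER BY updated_at;

-- The index that speeds this up, at the cost of maintenance on
-- every insert and update:
CREATE INDEX idx_base_updated_at ON base_table (updated_at);
```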
What I'd be more concerned about in your scenario is how you resolve the (presumed) possibility of conflicts where multiple clients update a local copy of the same record, then attempt to push these changes to the server. Which then will win? And if a decision is automatically made, will either client be notified that a conflict occurred, so as to verify that the decision was the correct one?