I'm developing a trading platform using Django, where users can publish trading signals with specific stop-loss levels. I'm facing a challenge in implementing a real-time feature that automatically marks signals as 'failed' when the cryptocurrency price hits the designated stop loss.

Here's the specific scenario: Assume I have 1,000 signals for BTCUSDT. If the BTC price reaches $20,000, I must instantly mark 700 of these signals as 'failed' based on their stop-loss criteria.

I seek advice on the best approach to achieve this in real time within the Django framework. (I am willing to use any technologies that suit my specific scenario.)

  • Well, at some point there's going to be a loop that checks each signal against the known state and sees which ones match. Doing that efficiently can be tricky, though, because these signals might have state (e.g. price changed by $X within 1 hour). In the past, I've experimented with stuff like the Esper EPL to write such temporal queries to create events. Normal DBs contain data and you execute queries. With EPL, the DB contains temporal queries and you stream in data, and get notified when queries match.
    – amon
    Commented Feb 17, 2024 at 8:31
  • Yes, that's what I need. Do you know of any other open-source or more widely used databases with this kind of feature? EPL doesn't seem to be Python-friendly. I was thinking of an infinite loop plus some indexing over the signals that need updating, then comparing the current price (whenever it updates) to the stop-loss price and triggering other tasks to make the required changes. @amon
    Commented Feb 17, 2024 at 10:52

1 Answer


instantly mark 700 of these signals

It's unclear what the verb "mark" means here. I will assume it means "run some arbitrary function".

You could use sortedcontainers -- it's pure Python, but tuned to compete with C extensions. Store (price, id) tuples in a SortedList, or use the prices as keys in a SortedDict. Either structure lets you find matching prices quickly, in O(log N) time.

Perhaps humans tend to choose round numbers for limit prices. Then you may want a sorted dict to map price to a list of IDs.
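A minimal sketch of that idea, assuming sortedcontainers is installed and that a stop loss counts as hit when it sits at or above the latest price (matching the query that comes up in the comments below); the function names are made up for illustration:

    from sortedcontainers import SortedDict

    stop_levels = SortedDict()                # price -> list of signal IDs

    def register_signal(signal_id, stop_price):
        stop_levels.setdefault(stop_price, []).append(signal_id)

    def on_price_tick(price):
        # irange() walks only the keys >= price, in O(log N + matches) time.
        hit_prices = list(stop_levels.irange(minimum=price))
        hit_ids = [sid for p in hit_prices for sid in stop_levels.pop(p)]
        if hit_ids:
            mark_failed(hit_ids)              # e.g. a bulk UPDATE or a Celery task

    def mark_failed(signal_ids):              # placeholder for "run some arbitrary function"
        print("failed:", signal_ids)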

Your problem is a good match for an RDBMS table, with an index on the price column. I will note in passing that sqlite can use either filesystem or memory for backing store.
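For instance, a rough sketch against sqlite's in-memory store (the table and column names are invented here; the same shape works in Postgres):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE signals (id INTEGER PRIMARY KEY, stoploss REAL, status TEXT)")
    con.execute("CREATE INDEX idx_signals_stoploss ON signals (stoploss)")

    def on_price_tick(price):
        # The index lets the planner jump straight to the matching rows.
        cur = con.execute(
            "UPDATE signals SET status = 'failed' WHERE status = 'active' AND stoploss >= ?",
            (price,),
        )
        con.commit()
        return cur.rowcount                   # how many signals were just marked failed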

Your problem is a good match for kafka consumer(s) that maintain a SortedDict (or shards of such a dict), listen for price tick messages, and take action upon finding matches.
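A rough outline of such a consumer, assuming kafka-python and a price-ticks topic carrying JSON messages like {"symbol": "BTCUSDT", "price": 20000.0}; the topic name and message shape are assumptions, and on_price_tick is the SortedDict logic sketched above:

    import json
    from kafka import KafkaConsumer          # pip install kafka-python

    consumer = KafkaConsumer(
        "price-ticks",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        tick = message.value
        if tick["symbol"] == "BTCUSDT":
            on_price_tick(tick["price"])      # find the hit stop losses and act on them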

Suppose you have K shards on K servers. Pick some small discretization interval intvl. Map a price to a shard using int(price / intvl) % K. Now, instead of broadcasting a given tick to K servers, you can unicast it to the single responsible server. That way, during a burst of rapid price movements spanning more than one interval, you are likely to keep a bunch of servers busy doing useful work (rather than filtering messages that don't require an action from them).
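The routing rule itself is tiny; K and intvl here are deployment parameters you would tune:

    K = 8              # number of shard servers
    intvl = 10.0       # discretization interval, in price units

    def shard_for(price):
        return int(price / intvl) % K

    # A tick at 20003.5 and a stop loss at 20007.0 fall in the same interval,
    # so both map to the same shard and only that server needs to see the tick.
    assert shard_for(20003.5) == shard_for(20007.0)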

  • Thanks, great ideas. Let's just say I want to do it on one server, and I need to update my PostgreSQL data when something changes. I have the current price of something, and I need to compare that price to roughly 30,000 different numbers and then update the matching records. I'm mostly worried about this comparison; is there any way I can do it efficiently? Also, the price can change every second, so that's a lot of processing.
    Commented Feb 18, 2024 at 6:33
  • You said we have a PostgreSQL table containing diverse prices. I assume we have an index on each price column. You go on to say "I need to compare that price to like N = 30,000 different numbers". I reject the premise. The index takes you, in O(log N) time, right to the place where the numbers of interest are. The whole point of an RDBMS index is you ignore nearly all of the stored numbers, focusing on just the ones of interest. // I usually choose postgres for a production setting. I really like "sqlite+pysqlite:///:memory:" for unit tests. I hear good things about pg-mem. Go benchmark!
    – J_H
    Commented Feb 18, 2024 at 16:55
  • I feel like we are not on the same page here. I have only ONE number, plus 30,000 numbers in my PostgreSQL records, and all of them are floats. Does indexing make any noticeable difference given that they're numbers? And as I said, I'm more worried about the comparison at that scale.
    Commented Feb 18, 2024 at 17:12
  • I'm thinking you have not benchmarked your PoC yet. (Also, we don't use IEEE-754 float for prices, we use scaled integers.) The PoC code issues SELECT * FROM accounts WHERE price = 1234; and it immediately gets back zero or more rows. This is true whether SELECT COUNT(*) FROM accounts; gives an answer of 30,000 or three billion. The interesting thing for performance is does that WHERE clause return one row, or a lot of matching rows? We call this selectivity, and it really matters, whether we're using an RDBMS or any other technology. For lots of matches, you need to do lots of work.
    – J_H
    Commented Feb 18, 2024 at 17:19
  • Yes, you're right, I did not know that. But the problem is my prices can have anywhere from zero to 8 decimal places, so should I use a default scale of 10^8 for all my prices (for scaled integers)? Doesn't that make the processing harder? And for the second part, yes, it can return many rows: SELECT * FROM signals WHERE stoploss > 23510.1533; and then I need to update them all. Are you suggesting doing it with a PostgreSQL UPDATE and seeing whether the results are OK? What if they're not?
    Commented Feb 18, 2024 at 17:37
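For the bulk update the last comment asks about, one indexed UPDATE statement avoids any per-row loop in Python. A hedged sketch with the Django ORM, where the Signal model and its symbol, stoploss, and status fields are assumptions:

    from django.db import transaction
    from myapp.models import Signal           # hypothetical app and model

    def mark_failed(symbol, current_price):
        # One UPDATE over the indexed stoploss column; returns the row count.
        with transaction.atomic():
            return (
                Signal.objects
                .filter(symbol=symbol, status="active", stoploss__gt=current_price)
                .update(status="failed")
            )

    # mark_failed("BTCUSDT", 23510.1533) marks every still-active signal whose
    # stop loss sits above the current price, mirroring the SELECT in the comment.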
