
I'm not asking for help solving a problem, but rather for possible ways to improve the speed of my program.

Essentially, the program tracks market data by pulling it from the public API provided by the devs, compares it with a pre-determined price I input, and, if it's a good deal, sends a POST request directly to the blockchain using the blockchain's public API.

I'm competing with a fair number of other programs that do the same thing; sometimes I win, sometimes I lose, and I want to improve my chances by optimizing my code as much as possible. PS: Everything here is 100% legal and allowed; both APIs are public and provided by the devs.

I start the threads this way (I have multiple, and each needs different variables):

    collection = 'xxxx'
    template = 154041
    price = 1
    x1 = threading.Thread(target=self.check2, args=(collection, template, price,))
    x1.start()

    collection = 'yyyyy'
    schema = 'yyyyy'
    price = 2
    x2 = threading.Thread(target=self.maxspeed, args=(collection, schema, price,))
    x2.start()

This starts check2 and maxspeed on separate threads; they're essentially the same function, just hitting different endpoints of the REST API with different variables.
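For reference, since the two functions only differ in the endpoint and the filter values, the same setup can be pictured as a single parameterized worker started on several threads. This is only a sketch, not my actual code: market_worker, the example URLs, and the daemon flag are all placeholders/assumptions, and the class wrapper is omitted for brevity.

    import threading

    def market_worker(url, price):
        """Same body as check2/maxspeed, but with the endpoint passed in instead of hard-coded."""
        ...

    # (endpoint with the collection/template filters baked in, maximum price I'm willing to pay)
    jobs = [
        ("https://market-api.example/v1/sales?collection=xxxx&template_id=154041", 1),
        ("https://market-api.example/v1/sales?collection=yyyyy&schema_name=yyyyy", 2),
    ]

    threads = [threading.Thread(target=market_worker, args=job, daemon=True) for job in jobs]
    for t in threads:
        t.start()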

Then it does the following. I fetch the info directly from the blockchain right at the start because it's needed for the POST request at the end, and the same info stays valid for roughly 5-7 hours. That's why I use time.time() to track the time elapsed since I last fetched it: once 5 hours have passed, the loop fetches the info again rather than fetching it before every single POST request. I also issue these requests at the start to warm up the pool_connections (I'm not sure that's how they're meant to be used), but essentially to make later requests faster.

    def check2(self, collection, template, price):
        start = time.time()
        URL = f"https://xxxxxxx.com"
        s = requests.Session()
        s.mount('https://', HTTPAdapter(pool_connections=3))
        headers = {'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.1 Chrome/56.0.2924.87 Safari/605.1.15"}
        s.headers.update(headers)
        test = s.get('https://get info from the blockchain url.com').json()
        ref_block_num, ref_block_prefix = get_tapos_info(test['last_irreversible_block_id'])
        chain_id = test['chain_id']
        test2 = s.get('https://POST Transaction url.com')
        stopsale = 0
        while 1:
            #### This is the if check to see if 5 hours have gone by so I get the renewed blockchain data
            if (((time.time() - start) / 60) / 60) > 5:
                test = s.get('https://xxxxxx').json()
                ref_block_num, ref_block_prefix = get_tapos_info(test['last_irreversible_block_id'])
                chain_id = test['chain_id']
                start = time.time()
                print('resetting time / blockchain data')
            try:
                resp = s.get(URL).json()
            except:
                print(f'error in try with {collection} {template}')
            try:
                listing_price = resp['data'][0]['price']['amount']
                sale_id = int(resp['data'][0]['sale_id'])
            except:
                print(f'error in {collection} {template}')
                continue
            if int(listing_price) <= int(price):
                if stopsale != sale_id:
                    ### This if is to prevent spamming POST's for the same sale_id
                    ### Now starts the buying part; it does a bunch of stuff with datastreams that even I don't fully understand myself, but apparently it's needed for the blockchain.
                    price2 = formatprice(listing_price, 8)  ### small function I wrote to properly format the price, because the price given by the API is weirdly formatted
                    tx = {Payload goes here}
                    data = tx['actions'][0]['data']
                    ds = DataStream()
                    ds.pack_name(data['from'])
                    ds.pack_name(data['to'])
                    ds.pack_asset(data['quantity'])
                    ds.pack_string(data['memo'])
                    data1 = tx['actions'][1]['data']
                    ds2 = DataStream()
                    ds2.pack_name(data1['buyer'])
                    ds2.pack_uint64(data1['sale_id'])
                    ds2.pack_uint64(data1['intended_delphi_median'])
                    ds2.pack_name(data1['taker_marketplace'])
                    tx['actions'][0]['data'] = binascii.hexlify(ds.getvalue()).decode('utf-8')
                    tx['actions'][1]['data'] = binascii.hexlify(ds2.getvalue()).decode('utf-8')
                    tx.update({
                        "expiration": get_expiration(datetime.utcnow(), timedelta(minutes=15).total_seconds()),
                        "ref_block_num": ref_block_num,
                        "ref_block_prefix": ref_block_prefix,
                        "max_net_usage_words": 0,
                        "max_cpu_usage_ms": 0,
                        "delay_sec": 0,
                        "context_free_actions": [],
                        "transaction_extensions": [],
                        "context_free_data": []
                    })
                    auth = tx['actions'][0]['authorization'][0]
                    private_key = "xxxxxxx"
                    tx_id, tx = sign_tx(
                        chain_id,
                        tx,
                        private_key,
                    )
                    ds = DataStream()
                    ds.pack_transaction(tx)
                    packed_trx = binascii.hexlify(ds.getvalue()).decode('utf-8')
                    tx = build_push_transaction_body(tx['signatures'][0], packed_trx)
                    res = s.post('https://POST TRANSACTION.com', json=tx)
                    result = res.json()
                    print(result)
                    try:
                        sqlite_insert_query = f"""INSERT INTO bought (saleid, price) VALUES ('{sale_id}','{price2}')"""
                        cursor.execute(sqlite_insert_query)
                        dblist.commit()
                    except:
                        print('error inserting into db lul')
                    print('sale ID:')
                    print(sale_id)
                    print(price2)
                    stopsale = sale_id
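As a side note on the 5-hour refresh above: that logic could live in one small helper instead of being repeated in every worker. This is only a sketch under assumptions: ChainInfo and info_url are made-up names, get_tapos_info is the same helper already used in the code above, and time.monotonic() is used instead of time.time() so that system clock changes cannot skew the elapsed-time check.

    import time

    REFRESH_SECONDS = 5 * 60 * 60  # re-fetch chain info every 5 hours

    class ChainInfo:
        """Caches chain_id and TAPOS values, refreshing them once they go stale."""

        def __init__(self, session, info_url):
            self.session = session
            self.info_url = info_url  # placeholder for the 'get info from the blockchain' URL
            self.fetched_at = 0.0
            self.chain_id = None
            self.ref_block_num = None
            self.ref_block_prefix = None

        def get(self):
            # Only hit the chain-info endpoint when nothing is cached yet
            # or the cached values are older than the refresh window.
            stale = time.monotonic() - self.fetched_at > REFRESH_SECONDS
            if self.chain_id is None or stale:
                info = self.session.get(self.info_url).json()
                self.chain_id = info['chain_id']
                # get_tapos_info comes from the original code above
                self.ref_block_num, self.ref_block_prefix = get_tapos_info(
                    info['last_irreversible_block_id'])
                self.fetched_at = time.monotonic()
            return self.chain_id, self.ref_block_num, self.ref_block_prefix

Each worker would then call chain_id, ref_block_num, ref_block_prefix = chain_info.get() right before building the transaction, instead of tracking start itself.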

Is there anything you guys would modify to improve the speed? I'm still getting outpaced by some other people occasionally.

I also live very close to where the API is hosted, so that's already a bonus I have over others.

Comments:
  • Is the slowest part resp = s.get(URL).json()? If so, does URL have any options to perhaps reduce the response payload?
    – JonSG, Jun 11, 2021 at 15:30
  • The slowest part is the request, yes. I switched to orjson instead of the default Python .json() parser, which improved runtime by 0.3 every 5 loops, which is significant. I also wanted to use pycurl instead of requests, since it's a lot faster, but I'm having a hard time getting it to work. And no, the payload has to be the way it is.
    – (asker), Jun 11, 2021 at 15:37
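Regarding the orjson swap mentioned in the comment directly above, this is roughly what that substitution looks like with requests. It is only a sketch: the URL is a placeholder and it assumes orjson is installed.

    import orjson
    import requests

    s = requests.Session()

    resp = s.get("https://market-api.example/v1/sales")  # placeholder URL
    data = orjson.loads(resp.content)  # parse the raw response bytes with orjson
    # ...instead of data = resp.json(), which goes through the default JSON parser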
