Other than seemingly perverse incentives, is there a good reason not to quantize trading time?
Taiwan Stock Exchange used to have quantized trading times (read "frequent batch auction"), but it led to worse price discovery and a bigger bid-ask spread: https://focus.world-exchanges.org/articles/citadel-trading-a...
> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
Is that better liquidity, etc., actually needed?
If we consider the function of a market to be arriving at prices that lead to the optimal allocation of the goods sold on that market, then intuitively there should be a limit on how fast trades need to propagate to achieve that, and that limit would be tied to how fast new information relevant to the producers and consumers of those goods comes out.
I don't think I'm expressing this well but the idea is that prices of goods should be tied to things that actually affect those goods. That's generally going to be real world news.
If you turn up trading speed much past the speed necessary to deal with that, I'd expect you could end up with the market reacting to itself. Kind of like when you turn an amplifier up too much and start getting distortion and even feedback.
> Is that better liquidity, etc., actually needed
Broadly speaking, yes. Turning down liquidity increases spreads, which affects which sorts of companies can raise which sorts of capital in those markets.
The paradox of HFT is that it's much smaller and more efficient than the slower, manpower-heavy Wall Street industry it replaced. It's just weird, which makes it easy to demonise in popular politics.
Is more liquidity needed? Yes, we have drastically reduced spreads these days.
Markets facilitate the buying and selling of securities, providing a regulated platform for companies to raise capital and for investors to trade assets based on supply and demand. Reducing spreads is optimal for everyone. You're making up some kind of pie-in-the-sky idea of how markets should exist. The folks running HFT or other flat-at-the-end-of-day shops do not have the capital to move prices as much as you might think. Even if they did cause some large movement in a stock, there is a good chance a larger fish is ready to take the other side.
You're missing the point: news never affects prices directly. News generates excess supply or demand (relative to the current price), which is the proximate cause of price changes. In a certain way, it's always "the market reacting to itself".
Thank you, it is nice to see an empirical observation of before and after the transition to continuous trading.
Note that American exchanges open and close with a batched cross. This hybrid approach is why most objections to intraday continuous trading are misplaced.
If you're talking about something like having an auction (per security) every N seconds, I don't see how that addresses the underlying issue, which is how to determine order priority.
If you have a bunch of orders at the same price on the same side, and an order comes in from the other side that crosses those orders (or there is an auction and there are orders on the other side which cross), how do you decide which of the resting orders at the same price should be filled first?
The most common way is that the first order to arrive at the exchange at that price gets filled first, and for that reason being fast is inherently advantageous.
If you're doing batches to reduce the advantage of being fast, you'd have to treat all orders that come in during a batch tick as simultaneous.
Resting orders from previous batches could have priority, if you want. You'd probably end up doing something with assignment of equal priority orders that looks like option assignment, basically random selection of shares among the pool of orders.
Personally, I'd fill unconditional market orders first, then market all-or-nothing (if fillable), then sort limit orders by price; within limit orders at the same price: unconditional first, then all-or-nothing, then all-or-nothing + fill-or-kill.
I don't know if I would assign shares proportional to orders or to shares in orders. Probably shares in orders. Might be gamed, but putting in a really big order because you want to capture a couple shares is risky.
Yes, you are re-inventing the wheel: the CME has pro-rata markets, e.g. soybeans and wheat, where the matching engine is not FIFO.
You could partially fulfil both resting orders, weighted by their (remaining) order size.
You might get "games" around people oversizing orders to try to get more "weight" to their orders, but that would be inefficient behaviour that could in turn be exploited, so people would still be incentivised to keep their orders honest.
How about adding a randomized delay (0 to T) to each order? For T = 30 s it would largely nullify millisecond-scale latency advantages.
> How about adding a randomized delay (0 to T) to each order?
This is the sort of good idea that just entrenches the algos. (Former algorithmic derivatives trader.)
For small orders, these delays make no difference. For a big order, however, it could be disastrously embarrassing. So now, instead of that fund's trader feeling comfortable directly submitting their trade using off-the-shelf execution algos, they'll route it to an HFT who can chunk it into itty-bitty orders diarrhea'd through to average out the randomness of those delays.
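A toy simulation of why chunking defeats the randomized delay, under the simplifying assumption that each child order draws an independent Uniform(0, T) delay:

```python
# Each child order gets an independent Uniform(0, T) delay, so the
# average delay of n children has standard deviation T / sqrt(12 n).
# Purely illustrative numbers.
import random, statistics

T = 30.0                      # max random delay, seconds
for n in (1, 100, 10_000):    # number of child orders
    trials = [statistics.mean(random.uniform(0, T) for _ in range(n))
              for _ in range(200)]
    print(f"n={n:>6}: mean delay ~{statistics.mean(trials):.2f}s, "
          f"spread ~{statistics.stdev(trials):.3f}s")
# The spread collapses as n grows, so a firm that can spray thousands of
# tiny orders faces far less timing uncertainty than one big order does.
```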
You fill the orders proportional to the order quantity for everyone
Randomize orders using a cryptographic hash of the order, client info, and all other fields plus a random salt added when the order is submitted.
Sort by hash. Impossible to game unless you can break the hash function.
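A sketch of what that could look like, assuming the exchange itself attaches the salt at submission time (all field names here are invented):

```python
# Sketch of salted-hash priority for otherwise-equal orders. The exchange
# adds the random salt on arrival; order fields are illustrative.
import hashlib, os

def priority_key(order: dict) -> bytes:
    salt = os.urandom(16)                            # exchange-side salt
    blob = repr(sorted(order.items())).encode() + salt
    return hashlib.sha256(blob).digest()

same_price_buys = [
    {"id": 1, "px": 100.0, "qty": 500},
    {"id": 2, "px": 100.0, "qty": 200},
    {"id": 3, "px": 100.0, "qty": 800},
]
fill_order = sorted(same_price_buys, key=priority_key)
print([o["id"] for o in fill_order])                 # random permutation
```

Since the salt is random per order, this amounts to a uniform shuffle of equal-priced orders, which is why the reply below points out it can still be gamed by splitting one intention into many small orders.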
So now I probabilistically spam a ton of different orders to, on average, get my desired fill. This just turns it into a "whoever is best at DoS'ing the exchange" game: as the order book fills with competitor orders, it makes sense to spam orders yourself so that each of your orders maintains the same probability of being filled.
Exchanges have requirements imposed on HFTs to prevent this kind of abuse. This one would be no different.
Impose a small order fee.
That will tend to discriminate against smaller traders, like 'retail' traders.
> will tend to discriminate against smaller traders, like 'retail' traders
Retail rarely hits an exchange.
Retail usually pays larger fees, unless you mean Robinhood, which is monetizing you in other ways.
The non-terrible version of this proposal is called Frequent Batch Auctions. I've read the paper and it seems like a decent idea to me.
I have heard that some real-life venues have implemented the terrible version of this proposal instead though.
I've argued in the past that we should have batch settlements every 30 seconds, instead of in real time. We don't really need microsecond based skimming/front running.
I've read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations.
I'm with you. Every 30 seconds. Cap the power of connection speed in trading. Trading should be based on the value of the item being traded, not on how short the fiber run is.
> read the arguments that the microsecond trading serves a purpose that benefits all of us, but I fail to see how, even with the explanations
What about an empirical argument? Microsecond trading reduces spreads and decreases volatility. It looks useless, so people try to regulate it away, and every time they do spreads widen and trading firms' and banks' profits fatten.
> Every 30 seconds. Cap the power of connection speed in trading
I'd go back to Wall Street if this happened: it would make market making profitable again.
CLOBs force market participants to compete on pricing (which is only indirectly related to latency, since you can quote tighter if you know your orders won't get picked off by other, faster traders).

Taiwan used to have a batch-style auction and it ultimately led to worse prices: https://focus.world-exchanges.org/articles/citadel-trading-a...

> Our analysis of the TWSE’s transition clearly demonstrates that continuous trading results in better liquidity provision, lower bid-ask spreads, more stable prices and enhanced price discovery, as well as higher trading volumes.
So now the race is to get the order in (or out) at 29.999999985 seconds, i.e. 15 ns before the batch deadline. Interesting twist on the game. It's unlikely to change who wins it; could it be worse for retail punters?
We need to kill "front running" as a criticism of low-latency algo trading with fire. It's garbage.
Front running is highly illegal: it's when a broker knows a client is about to make a big trade and, using that inside information, trades on another account (typically their own) to exploit it. It's a straight-up cheat.
Inferring from market data alone which way a price will move is legal, honest, has been attempted since forever, and is absolutely fine. It is also very, very difficult. Anyone who can do it makes the market more efficient, reduces the money available from doing it (which goes into investors' pockets through tighter spreads), and really earns their money. You don't have to like them if you don't want to, but it's worlds apart from front running on inside information.
Where did algo trading profit come from? It was won, by being more competitive, out of brokers' profits, with a good chunk of those profits going back to investors. Spreads are tighter.
Where are the clients' yachts? Well, tech did something about some of the broker rip-offs that paid for those yachts, which puts money in your pocket.
You could randomize the batching deadline.
and it won't help retail investors either.
Batching can greatly lower the returns to speed, which would be sufficient to get participants to invest less in speed. It doesn't need to reduce the returns to speed to 0, and indeed reducing the returns to speed to 0 is sort of an incoherent idea to begin with.
If there are multiple orders at the same price on the same side, how should we determine which ones are filled first?
Or put another way, how should we determine which orders are least likely to get filled?
Well, either volume-weighted or randomised, then.
HFT is still a massive thing in volume-weighted (we call them pro-rata) markets, and it's even more toxic for retail: lots of people submit large orders that are unlikely to get filled immediately (which retail doesn't have the money to do) to secure a bigger share of the pie.
30 seconds seems reasonable. Don't the markets themselves make a fair amount of money off of providing fast access to the HFTs? Is that the primary perverse incentive?
Why not 1 minute then?
You have ignored the whole issue: how are you then ordering those contracts within the 30-second batches?
We already have systems for that; I believe the highest prices get filled first, but I'm not a trader.
Certainly such systems exist; it was mostly a rhetorical question, though. People love to say "just run batches every N" without diving into the complexity involved. All batching would accomplish is wider bid/ask spreads.
I have made many very good arguments before that 45 seconds is ideal.
There are cases to be made that you get tighter spreads.
The larger the time interval, the larger the pricing risk. If I am selling and there is a long wait until the trade, I am probably going to want a higher price. The same goes on the bid side.
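A back-of-the-envelope sketch of that risk, assuming the mid price follows a random walk; the volatility figure and session length are arbitrary assumptions:

```python
# If the mid price random-walks with annualized vol sigma, the price risk
# a quoter carries while waiting for the next batch scales with
# sqrt(interval). Illustrative numbers only.
import math

sigma_annual = 0.20                   # 20% annualized volatility (assumed)
seconds_per_year = 252 * 6.5 * 3600   # approx. trading seconds in a year

def one_sided_risk(interval_s: float, price: float = 100.0) -> float:
    sigma_interval = sigma_annual * math.sqrt(interval_s / seconds_per_year)
    return price * sigma_interval     # one-std move over the wait

for dt in (0.001, 1.0, 30.0, 60.0):
    print(f"{dt:>7}s wait: ~${one_sided_risk(dt):.4f} of price risk per share")
# Going from ~1 ms to 30 s multiplies the risk by sqrt(30/0.001) ~ 173x,
# which a rational quoter passes on as a wider spread.
```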
Skywave has a point: they went through regulatory oversight to get their microwave links working, whereas these other firms went behind the FCC's back and profited by not doing so. The fine is likely a lot lower than the profits they made, so what incentive would future companies have to go through the proper channels?
> Experimental licences have more bandwidth than commercial licences, allow for “frequency hopping”
I think there is another angle to this where the modulation scheme makes a lot of the difference. If you figure out a way to send market information below the noise floor (i.e., spread spectrum schemes) how would anyone even know what you are doing?
If I was operating an HFT firm I would be all-in on the "ask for forgiveness, not for permission" angle, because the politics around my business are really nasty.
> I would be all-in on the "ask for forgiveness, not for permission" angle, because the politics around my business are really nasty
Securities are one of the few parts of the American economy regulated like a European sector. Complain-investigate regimes. Various agencies at multiple levels that can ban you from the industry and fine you. The rational response is to turn risk tolerance down, not up.
some fantastic older reading (2014):
HFT in My Backyard
https://news.ycombinator.com/item?id=8354278
https://news.ycombinator.com/item?id=8371852
Also, from the same blog:
Shortwave Trading | Part I | The West Chicago Tower Mystery
https://sniperinmahwah.wordpress.com/2018/05/07/shortwave-tr...
SHORTWAVE TRADING | PART II | FAQ AND OTHER CHICAGO AREA SITES
https://sniperinmahwah.wordpress.com/2018/06/07/shortwave-tr...
If you want even more fun reading, check out: http://www.nanex.net/aqck/aqckIndex.html
It’s the only site I know of that has posts like it. Sadly, he hasn’t posted in a while.
He went off the rails when Trump was running for office. I remember being really disappointed to see the trading observations (and his own product advertisements) replaced by political rants.
Why would a radio wave that is reflected off the atmosphere (and therefore takes a longer route) be faster than a direct fibre cable?
Radio waves travel at nearly the speed of light, whereas light in a fiber-optic cable travels at ~67% of the speed of light in a vacuum, due to the refractive index of the glass.
Ericsson blog wrote:
> In a vacuum, electro-magnetic waves travel at a speed of 3.336 microseconds (μs) per kilometer (km). Through the air, that speed is a tiny fraction slower, clocking in at 3.337 μs per km, while through a fiber-optic cable it takes 4.937 μs to travel one kilometer – this means that microwave transport is actually 48% faster than fiber-optic, all other things being equal.
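A quick check of those figures; the route length for a Chicago to New Jersey hop is my rough assumption:

```python
# Sanity-check the quoted per-km delays: microwave vs fiber.
fiber = 4.937      # us per km, from the quote above
microwave = 3.337  # us per km

print(f"ratio: {fiber / microwave:.3f}")   # ~1.479 -> the "48% faster" figure

# Illustrative route: Chicago to the New Jersey exchanges, roughly 1,200 km
# as the crow flies (my assumption). One-way saving:
km = 1200
print(f"saving: {km * (fiber - microwave):.0f} us one-way")  # ~1,920 us
```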
I worked for three years designing custom low-latency point-to-point microwave radios for HFT for this very reason. They didn't need very high bandwidths (their long-haul network was less than 200 Mbit, whereas in New York/New Jersey we had about 5 Gbps because the hops were much shorter and they had licenses for more RF bandwidth at a higher frequency).
At those time scales, the difference is so large, it was incredible what they were willing to pay to build these networks!
I somewhat regret not specialising in RF/comms in my EE degree - this side of HFT sounds like a fascinating line of work (Trading at the Speed of Light was a great read).
I doubt there's much here that's cutting edge. Any digital processing done in typical radios to correct for channel impairments is avoided, as it just adds latency. Meanwhile, LTE uses as many digital techniques as possible to maximize bandwidth (MIMO, HARQ, OFDMA).
Haha, you got us :) - in terms of the digital side yes, kind of. We’d even try to not have any digital in the path if possible on some hops! We did have things like LDPC (and different FEC on control packets) but it was definitely not as complex as LTE or newer cellular or WiFi standards. But what was avoided digitally meant far more work going into the analogue side to improve SNR, dynamic range, NPR etc. through the signal chain.
More bluntly: light in a fibre is still bouncing around a lot.
Not in single-mode fibers, which still exhibit the effect. It's just down to the refractive index of the glass.
The speed of light in an optical fibre is about 2/3 of its speed in air.
In addition to the radio signal being faster, as noted by the other commenters, for long distances the radio path is actually the shorter route.

Take one of the routes in the article, Chicago to Sao Paulo: the distance is about 8,400 km in a straight line. According to https://en.wikipedia.org/wiki/Skywave a single shortwave hop can reach 3,500 km, so 3 hops are required, for a total of about 30 ms.
The latency of the shortest commercially available submarine cable between the US and Sao Paulo alone is significantly higher than that (almost double), and it comes out of the east coast, so you'd still have to factor in the latency between Chicago and New York.
Even specialized low latency networks that mix wireless and fiber will still have much higher latency than the radio.
The tradeoff is that shortwave radio has very little bandwidth so you're restricted to simple signals.
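Putting the numbers from the last few comments together (per-km delays are from the Ericsson quote upthread; the extra path length from bouncing off the ionosphere is ignored, so these are lower bounds):

```python
import math

# Chicago -> Sao Paulo skywave example, numbers from the comments above.
distance_km = 8400        # straight-line distance
max_hop_km = 3500         # per Wikipedia's skywave article
air = 3.337               # us per km
fiber = 4.937             # us per km

hops = math.ceil(distance_km / max_hop_km)
print(f"{hops} hops")                                            # -> 3
print(f"air, straight line: {distance_km * air / 1000:.0f} ms")   # ~28 ms
print(f"fiber, straight line: {distance_km * fiber / 1000:.0f} ms")  # ~41 ms
# A real cable detours via the US east coast, so the radio's lead over any
# fiber route is even larger than this idealised gap suggests.
```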
Light doesn’t go at light speed through optical fiber.
Sure it does. It's just that the speed of light in non-hollow optical fiber is slower than light in a vacuum.
Microsoft bought a hollow optical fiber company for a reason.
Huh, 50% faster. https://spie.org/news/photonics-focus/julyaug-2022/speeding-...

> The immediate allure of hollow-core fibers is that light travels through the air inside them at 300,000 km per second, 50 percent faster than the 200,000 km per second in solid glass, cutting latency in communications. Last year, euNetworks installed the world’s first commercial hollow-core cable from Lumenisity, a commercial spinoff of Southampton, to carry traffic to the London Stock Exchange. This year, Comcast installed a 40-km hybrid cable, including both hollow-core and solid-core fiber, in Philadelphia, the first in North America. Hollow-core fiber also looks good for delivering high laser power over longer distances for precision machining and other applications.
Yes, funnily enough Microsoft's reason was not HFT but AI: inter-datacentre training is limited by the latency between the datacentres.
Generally they want to build the datacentres close to metro areas. By using hollow-core fibre, the radius within which a datacentre can be placed has essentially grown by a factor of 3/2. This significantly reduces land-acquisition costs, and supposedly MS has already made back the Lumenisity acquisition cost through those savings.
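A toy version of the siting argument: for a fixed one-way latency budget (the 1 ms here is an arbitrary assumption), the feasible radius scales with propagation speed and the usable land area with its square:

```python
# Feasible siting radius for a given latency budget, solid vs hollow core.
budget_ms = 1.0                      # one-way latency budget (assumed)
v_solid_km_s = 200_000               # light in solid-core fiber
v_hollow_km_s = 300_000              # light in hollow-core fiber

r_solid = v_solid_km_s * budget_ms / 1000    # km
r_hollow = v_hollow_km_s * budget_ms / 1000
print(f"radius: {r_solid:.0f} km -> {r_hollow:.0f} km (x{r_hollow/r_solid:.1f})")
print(f"usable area grows x{(r_hollow / r_solid) ** 2:.2f}")  # 1.5^2 = 2.25
```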
That feels somewhat implausible. I assume a Microsoft-sized data center starts at over $100 million. Moving the footprint X miles away might be cheaper, but that is probably a drop in the bucket given everything else required for a build-out. I would further assume they were already some distance away from top-tier expensive real estate to accommodate the size of the facility.
It's reality. It's generally about site and infra access, including power and fiber paths. The bigger providers (e.g. AWS) simply don't have more feasible sites within a few ms of the existing region DCs. Expect to see more infrastructure like "local zones" or AZs that are tens of ms away from the rest of the region.
By definition, it does, because the maximum speed is qualified by "the speed of light in a vacuum", so the speed of light [in other media] is simply a function of how much the medium slows it down, yet it is still the speed of light. Funny how that works!
Remember when HN was always HFT comp dick swinging contests? Would any top grad nowadays go to Jane Street over OpenAI or Anthropic?
No, I doubt it. At the same time, OAI and Anthropic both hire waaay fewer people straight from undergrad, whereas Jane Street (and similar) are a lot more realistic, and it's not like the pay is bad.
> Would any top grad nowadays go to Jane Street over OpenAI or Anthropic?
You’re measuring which part of the economy pays math majors best. If I had to trace the centre of gravity of my top former colleagues: banks and hedge funds (never Jane); then Uber; some leakage to crypto (they never recovered); now AI.
MEV, but for TradFi.
FT Alphaville: High frequency trading
Skywave Networks accuses Wall Street titans of ‘continuous racketeering and conspiracy’
FT Alphaville is a blog attached to the Financial Times newspaper. It's free to sign up for an account (archived copy: https://archive.ph/2vQm6).