So I learned a little this week about sockets, and it has given me pause to think about the realities of 'success' with regard to the MASSIVE adoption of the protocols I tend to talk about on this blog.
They say a little knowledge is a dangerous thing... well, here I go... head first:
DNS resolution has been under attack recently (the last six months) from a new set of poisoning attacks. One of the main reasons the attacks work is that DNS uses UDP and not TCP. The basic fix that has been implemented is Source Port Randomization, but even that has been brute-force attacked... so people speculate about what else could be done. One idea was to make every request twice and require that the answers match (this is known as debouncing). Another option proposed is to just use TCP instead of UDP.
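To make the source-port-randomization idea concrete, here's a minimal sketch of a UDP DNS query in Python. The packet layout follows RFC 1035; the defence is simply that both the 16-bit transaction ID and the source port are unpredictable, so a spoofer has to guess roughly 32 bits instead of 16. The function and server address are my own illustration, not anything from the post:

```python
import os
import socket
import struct

def build_query(hostname, txid):
    """Build a minimal DNS A-record query (RFC 1035 header + question)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: hostname as length-prefixed labels, then QTYPE=A, QCLASS=IN
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def query_with_random_port(hostname, server="8.8.8.8"):
    # Randomize the transaction ID; bind to port 0 so the OS picks an
    # ephemeral source port (randomized on patched resolvers).
    txid = struct.unpack(">H", os.urandom(2))[0]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 0))
    sock.settimeout(2)
    sock.sendto(build_query(hostname, txid), (server, 53))
    reply, _ = sock.recvfrom(512)
    sock.close()
    # Accept the reply only if its transaction ID matches ours
    assert struct.unpack(">H", reply[:2])[0] == txid
    return reply
```

The point of the brute-force worry is that 32 bits of randomness is still guessable if the attacker can flood enough forged replies in the window before the real answer arrives.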
So here's what I find interesting... The debounce option was rejected because it would double the amount of traffic on the DNS system; we would go from 2 packets on the wire to 4. It has been determined that the current DNS infrastructure is running at over 50% capacity, so instantly doubling the load is simply not an option. SO... why not use TCP? Well, with TCP you have the 3-way handshake, then the query, then the response, and then the FIN and the FIN-ACK... 7 packets on the wire (and larger packets at that). I find all of this fascinating in a purely academic way; this stuff is all new to me. (Now I have a basis on which to go understand DNSSEC; that'll be next week's reading.)
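The packet bookkeeping above can be sketched in a few lines, along with what DNS-over-TCP actually looks like on the wire (RFC 1035 says the TCP variant prefixes each message with a 2-byte length). The tally follows the post's own counting; the helper function and server address are illustrative assumptions:

```python
import socket
import struct

# Packet tally per lookup, using the post's bookkeeping:
UDP_PACKETS = 2              # query out, answer back
TCP_PACKETS = 3 + 1 + 1 + 2  # SYN/SYN-ACK/ACK, query, answer, FIN + FIN-ACK

def tcp_dns_query(wire_query, server="8.8.8.8"):
    """Send an already-built DNS query over TCP. Per RFC 1035 s4.2.2,
    the message is prefixed with its 2-byte big-endian length."""
    with socket.create_connection((server, 53), timeout=2) as sock:
        sock.sendall(struct.pack(">H", len(wire_query)) + wire_query)
        length = struct.unpack(">H", sock.recv(2))[0]
        reply = b""
        while len(reply) < length:
            reply += sock.recv(length - len(reply))
        return reply
```

By this count, moving DNS wholesale to TCP is even worse than debouncing: 3.5x the packets instead of 2x, plus the connection state each server would have to hold.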
Then I wonder... is anyone doing the math? If OpenID became ubiquitous, or InfoCards did, what would that look like at a packets-on-the-wire level? Is there so much spare bandwidth and processing power available now that we don't have to worry about this?
Thursday, October 16, 2008
1 comment:
In a word: yes. In another word: latency. Network engineers know only too well how much of a burden extra round trips put on the user experience and income (ask Google or Amazon, or read about Obfuscated TCP).
What I find most frustrating about using the internet and the likes of OpenID is latency. Especially because I live in Australia, and the hop to the other side of the equator is - more often than not - a wait-and-see experience.
At the speed of light, just getting from Sydney to San Francisco takes 40ms, and that doesn't include switching, filtering or processing of packets. As you've mentioned, with TCP you have to double that. In addition, when you're sending multiple SYN/ACK packets just to set up secure connections, those numbers quickly add up if you're requesting data from one site that relies on data from another site and the requests can't be parallelised. This is why I believe this networking model for identity may be the wrong approach.
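The commenter's arithmetic can be sketched as back-of-the-envelope Python. The distance figure and the extra-round-trip accounting are my assumptions, chosen to match the 40ms claim (which works out at roughly vacuum light speed; real fibre is closer to 200,000 km/s, so actual numbers are worse):

```python
# Assumed great-circle-ish distance Sydney -> San Francisco, in km
DISTANCE_KM = 12_000
C_VACUUM_KM_S = 300_000  # speed of light in vacuum

one_way_ms = DISTANCE_KM / C_VACUUM_KM_S * 1000  # ~40 ms, matching the comment
round_trip_ms = 2 * one_way_ms                   # ~80 ms

# A TCP handshake costs a round trip before any data flows, so a single
# TCP request/response to a distant identity provider starts at:
tcp_lookup_ms = 2 * round_trip_ms                # handshake + request/response

print(one_way_ms, round_trip_ms, tcp_lookup_ms)
```

Chain two or three such lookups serially (relying party to IdP, IdP to attribute source) and you're past half a second before any real work happens, which is the commenter's point.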
For me it's imperative that infrastructure minimise the use of sequential packet protocols that require always going back to the source through a secure pipe. Hence we should be looking more at the content-centric networking model for identity, which has its own issues to be solved.