Windows Server 2008 IP Stack Source IP Selection Logic

This one cost me a few hours: Microsoft changed the TCP/IP stack logic that decides which IP address is chosen as the source of an outgoing packet. What kind of problem did this create? During the setup of a Microsoft UAG/TMG 2010 server I had configured the outside interface with DNS pointing to Google's public DNS servers at 8.8.8.8 and 8.8.4.4 (internal DNS at the client site had no forwarders configured), and no matter what rules I added to TMG I could not get past a denial, because the origin interface (Windows was selecting the SSL tunnel virtual interface) was different from the interface facing the outside destination. When I configured VPN support I had put the client address pool in the 10.101 range. Based on the longest-matching-prefix algorithm, Windows chose the SSL VPN tunnel interface, which had the address 10.101.0.200, as the source for the DNS packet to 8.8.8.8, because the first 6 bits matched! I changed the DNS servers on the outside interface to servers whose addresses more closely matched the high-order bits of the outside interface address, and Internet access from the UAG/TMG server started working. Seems a bit absurd, no?

DNS Server -> 8.8.8.8 -> 00001000.00001000.00001000.00001000
SSL VPN Virtual Interface -> 10.101.0.200 -> 00001010.01100101.00000000.11001000
Outside -> 155.x.x.x -> 10011011.
Inside -> 63.x.x.x -> 00111111.
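The comparison above can be sketched in a few lines of Python. This is only an illustration of the longest-matching-prefix rule, not Microsoft's actual implementation, and the last three octets of the outside and inside addresses are placeholders since the originals were masked:

```python
def common_prefix_bits(a: str, b: str) -> int:
    """Count how many leading bits two IPv4 addresses share."""
    abits = ''.join(f'{int(octet):08b}' for octet in a.split('.'))
    bbits = ''.join(f'{int(octet):08b}' for octet in b.split('.'))
    count = 0
    for x, y in zip(abits, bbits):
        if x != y:
            break
        count += 1
    return count

destination = '8.8.8.8'
# Candidate source addresses; the 155.x.x.x / 63.x.x.x tails are made up.
candidates = ['10.101.0.200', '155.0.0.1', '63.0.0.1']

best = max(candidates, key=lambda ip: common_prefix_bits(ip, destination))
print(best)  # -> 10.101.0.200, which shares 6 leading bits with 8.8.8.8
```

The SSL VPN address wins with 6 matching bits, while the outside interface (starting 10011011) shares none with 8.8.8.8, which is exactly why the packets left the wrong interface.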

For more information on this change in Windows, see Microsoft KB 969029: http://support.microsoft.com/default.aspx?scid=kb;EN-US;969029
