Random 404ing

From: CHYRON (DSMITHHFX), 28 Sep 2013 12:20
To: ALL (1 of 10)
I deployed a well-tested new subsection of three web pages to a live site yesterday. After some initial, not unexpected, caching issues (which quickly cleared up), the new pages worked in every OS and browser I tested: XP, W7, OS X, Linux (two flavours), and Android, in Chrome, FF, Safari, and IE 8 & 10, both at work and (later) at home. I even tried it in compatibility view on IE. Still worked for me.

Then I got an email saying two of the three pages (all derived from the same template) are throwing 404 errors, both at the client's office and on another external business network (no such issues appeared while testing the new pages from our staging server at either location).

I ran the site through http://www.brokenlinkcheck.com and it reported no broken links.

Anyone run into this before? It's a first for me. Could it be a caching issue (though these are new pages)? Aggressive timeout settings on the Apache server (it was acting pretty slow; I suspect cheapo cloud hosting)? A firewall issue on the user end? It's just weird that one of the three new pages works, and the other two aren't exactly broken: their URLs are simply "not found" from some, but not all, external networks. I don't know how to begin troubleshooting this.
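One way to start narrowing it down might be to take the browser (and its cache) out of the loop entirely: fetch the page from a script and compare the status code and proxy headers between a network that works and one that 404s. A rough sketch using Python's stdlib (the URL is a placeholder, not the real page):

```python
# Probe a URL with caching disabled and report what the network path
# actually returns. Substitute the real page address for the placeholder.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def probe(url):
    """Return (status_code, Via header) for a fresh, uncached fetch."""
    req = Request(url, headers={"Cache-Control": "no-cache"})
    try:
        resp = urlopen(req)
        return resp.getcode(), resp.headers.get("Via")
    except HTTPError as err:
        # 404 and friends arrive here; err still carries the headers.
        return err.code, err.headers.get("Via")

# Example (hypothetical URL):
# print(probe("http://www.example.com/newsection/page2.html"))
```

Run from both locations, differing results would point at something in the path (proxy, firewall, DNS) rather than the server; a Via header appearing on only one network would finger an intermediate proxy.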
EDITED: 28 Sep 2013 12:25 by DSMITHHFX
From: ANT_THOMAS, 28 Sep 2013 12:50
To: CHYRON (DSMITHHFX) (2 of 10)
Have they been sent links to click? Could it be an upper/lower-case issue with the URLs?
From: CHYRON (DSMITHHFX), 28 Sep 2013 13:51
To: ANT_THOMAS (3 of 10)
Nope, the URLs are fine, and I've pretty much ruled out code/server issues. They're using IE9, which I haven't been able to test against (though, as mentioned, it's all working from staging for them). If it hasn't cleared up by Monday, I may have to do an on-site visit to see for myself what's going on.
From: CHYRON (DSMITHHFX), 28 Sep 2013 19:08
To: ALL (4 of 10)
Tested in IE 7 & 9 via http://browsershots.org, and from another external network (wifi at a coffee shop). All loaded it fine. Also ran it through several proxy sites; no problems.
From: Dan (HERMAND), 28 Sep 2013 20:01
To: CHYRON (DSMITHHFX) (5 of 10)
The only time I've seen something similar to this was when a site's proxy server was mangling the headers in the request. In that particular instance, it came down to a GZIP compression issue.

(For anyone who cares about the specifics: that particular site returned a GZIPped page regardless of what was asked for, and the proxy server was then stripping the encoding header from the response, confusing some browsers.)
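A minimal sketch of that failure mode, using Python's stdlib (not the actual site or proxy involved, just the mechanism): the same gzipped bytes are either a readable page or garbage depending on whether the Content-Encoding header survives the proxy.

```python
import gzip

body = b"<html><body>Hello</body></html>"
compressed = gzip.compress(body)

def render(headers, payload):
    """What a client does with the payload, given the response headers."""
    if headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(payload)
    return payload  # no encoding header: bytes taken at face value

# Header intact: the client decompresses and gets the real page.
intact = render({"Content-Encoding": "gzip"}, compressed)

# Proxy strips the header: the client renders raw gzip bytes instead.
stripped = render({}, compressed)
```

Here `intact` equals the original page while `stripped` is the raw gzip stream, which is exactly the kind of thing that shows up differently depending on which proxy sits between the user and the server.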
From: koswix, 28 Sep 2013 22:23
To: CHYRON (DSMITHHFX) (6 of 10)
Perhaps you should roll back to the previous version in Git and test this fork some more.
From: CHYRON (DSMITHHFX), 29 Sep 2013 01:03
To: koswix (7 of 10)
I may wind up doing the equivalent if the issue persists on Monday.
From: CHYRON (DSMITHHFX), 30 Sep 2013 15:55
To: ALL (8 of 10)
Problem solved. Added:

<script>
   $Git is for losers();
</script>

to page headers.
From: ANT_THOMAS, 30 Sep 2013 16:00
To: CHYRON (DSMITHHFX) (9 of 10)
:'D

What was the issue?
From: CHYRON (DSMITHHFX), 30 Sep 2013 16:07
To: ANT_THOMAS (10 of 10)
Turns out we were talking about two different pages with similar names and URLs, and neither of us had bothered to read the other's emails or click the links in them. Neat, huh?