WIRED and Ad Blockers

I get it: the site makes its money off the ads. I rarely read their articles anymore. When I see something interesting, I forget why I stopped reading the site and click the link. Then they interrupt my reading to complain about having the ad blocker enabled. Trying to be a good person, I change the settings to allow the ads like they want.

Here is the kicker, though: they interrupt me again to say “Thank you.” That… That makes me so angry that I revert the settings to block the ads and close the tab. At that point I remember why I no longer read the site. I came to read, not to be prevented from reading. Just let me read.

Restore Chrome New Tab Tiles

Occasionally I accidentally remove one of the tiles from Chrome’s New Tab page. I try to live with it, but after about a week the annoyance becomes too great. There IS an undo, but once you click away from the page it is no longer an option. I did click the undo link this last time, but it did not restore the tile I inadvertently removed.

So then I end up looking up how to restore them all. Since I have had to do this twice, here are the steps to save myself time in the future:

    1. Click the menu. (Upper right three lines.)
    2. Click Settings.
    3. Open the Advanced section.
    4. Click the “Clear browsing data…” button.
    5. Check “Download history”. (Uncheck everything else.)
    6. Click “Clear browsing data” button.

That should restore all the tiles. It is sad there is no view of all the removed ones to pick and choose which to allow back.

Regarding step 5, checking other options like cookies could force you to log back in to your accounts.

Live HTTP Headers Equivalent for IE or Edge 2016

Over the years, my Live HTTP Headers Equivalent for IE post has pretty consistently gotten a few hits a month. Maybe that is because Google still ranks it #2 behind a StackOverflow post from 2010. I decided to update it since the post is from 2007 and what is available has changed.

The original issue was end users having a problem downloading office files from our web site. The issue only happened in IE, so we could not get them to look at headers using Firefox to diagnose the problem. The users did not want to use Firefox, or perhaps could not because their work environments did not allow installing alternative browsers.

Maybe – Free

Sorted in the order I would probably recommend them.

  1. F12 Developer Tools (for Edge; IE Developer Tools) – looks very much like the Web Developer tools for Firefox and Chrome. The Network tab captures which pages are taking forever to load. Clicking on a specific request displays the request and response headers.
  2. iehttpheaders – Dunno if much has changed, but this was the better of the two from the original 2007 post.

No Way – Free

These are too scary or complicated to be something I would want to have to walk end users through using. Fine for power users, but not my purpose.

  • Fiddler – disappointed the logo is not a crab. Listens in the background and captures all browsers. All our stuff has encrypted traffic which Fiddler can only see by installing a CA called DO_NOT_TRUST, which there is no way I am going to ask clients to do.
  • Wireshark – probably okay for a power user, but not most people in the general public.

No Way – Paid

Not really useful for my purposes because this was about having end users install something to help us figure out the source of their trouble.

  • DebugBar HTTPTab – Looks viable, but it is essentially the same as the F12 Developer Tools. Has issues with other integrations.
  • HTTP Debugger – Sniffs all HTTP traffic.
  • HttpWatch – the free version only works with well-known sites. You have to get the paid version to see our stuff.
  • HTTP Analyzer – trial version. Has a warning that the technology it uses likely causes antivirus software to think it is malicious. Difficult to explain to users: hey, use this thing your computer will likely complain is a virus.
  • IEWatch – IE plugin. Ancient; it has not been actively developed in 9 years. The newest OS it reports supporting is Windows Vista, so it might have issues with more recent ones like 8 and 10.

Web browsing history

The history of what I have looked at in my web browser should be a feature I like. I knew I read something this weekend about work ever expanding to fill the time. Even as efficiencies make things easier, there are places where waste balloons to make people work more than they really need. I eventually recalled the example used was lawyers creating work for each other by overwhelming the opponent with too much information to sift through. It turns out that was correct.

In the middle of the week I ran across a couple of articles about how automation, while killing off some jobs, will create others. I wanted to include the article from the weekend, but finding it was a royal pain in the ass. About ten minutes in, I wished I had sent it to my boss like I thought I should, just so it would be easier to find.

Eventually I located it to include in yesterday’s blog post. All it took was finding the right keyword.

I hit so many web pages that search is really the only way to find something so specific. And even then, I have to use my library training to find what I want.

Bookmarks or Evernote or save-for-later services are not that helpful because I have to have the forethought to save them. All too often the things I save are not what I need later, and the things I failed to save are what I do need.

I guess what I want is a smarter web browser history search which can figure out from my browser history what is related to a specific page.

For Want of a Scrollbar

An adventure usually starts when I tweet an annoyance:

Who has two thumbs and regularly disables Sharepoint’s overflow: hidden CSS to re-enable the scrollbar? Me…

A coworker asked a good question, which is, “Any easy/lazy way to make it automatic-like?”

My response was that a Greasemonkey script should do the trick. Okay, so, how to make it happen?

Pretty sure my coworker, like me, uses Chrome. This is good, because in 2009 Chrome gained native Greasemonkey script support. Scripts are treated as Extensions. I like this because there is one place to look for them rather than a separate queue like the one I am familiar with in Firefox’s Greasemonkey plug-in.

So I found some pages on writing Greasemonkey scripts. What I wanted to do looked easy enough. Which, of course, meant I spent a few hours stumbling around the Internet confused about why it did not work. In the end, writing this <filename>.user.js did the trick:

// ==UserScript==
// @name Sharepoint Scrollbar Fix
// @namespace http://sharepoint.oursite.com/
// @description Removes the overflow:hidden which is buggy in WebKit browsers
// @include https://sharepoint.oursite.com/*
// ==/UserScript==
document.body.style.overflow = "scroll";

From my research, WebKit browsers have had an issue with overflow:hidden going back years. Chrome and Safari are WebKit browsers. (Guess I could have saved myself time just using Mozilla.) Using overflow:scroll, overflow:auto, or even removing the overflow property brings out a second, usable scrollbar.

Probably GM_addStyle is a better approach, but this one worked first.
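For the curious, the GM_addStyle route would look roughly like the sketch below. This is not the script I deployed: GM_addStyle is a real Greasemonkey API that injects a `<style>` element, but the fallback function and the fake document here are my own so the logic can be exercised outside the browser.

```javascript
// Sketch of the GM_addStyle approach: inject a stylesheet rule instead of
// poking document.body.style directly. Marking it !important helps it win
// over the page's own overflow:hidden rule.

// Minimal stand-in for GM_addStyle so this runs outside Greasemonkey.
function addStyle(css, doc) {
  const style = doc.createElement('style');
  style.textContent = css;
  doc.head.appendChild(style);
  return style;
}

// A tiny fake document is enough to see what would get injected.
const fakeDoc = {
  head: { children: [], appendChild(el) { this.children.push(el); } },
  createElement: (tag) => ({ tag, textContent: '' }),
};

const rule = addStyle('body { overflow: auto !important; }', fakeDoc);
console.log(rule.textContent);
```

In a real userscript, the call would simply be `GM_addStyle('body { overflow: auto !important; }')` with `document` implied, and it avoids waiting for `document.body` to exist.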

Protocols matter. Most of the time I spent confused was solved by having http in the @include address when the Sharepoint site uses https.

Testing it was interesting, as Google does not allow installing extensions from just anywhere on the Internet, so uploading the script to my web site was not a good way to get it into the browser. Instead, just open the Extensions page and drag and drop the file there. It prompts to make sure you really want to install it. In the end, it is much more efficient that way.

Conclusion: Pretty easy to create and test. Very lazy fix. The information online about making one is not great.

Any coworkers who want to use it, I added it to the Content area on my site.

Context Menu

Almost everyone using a computer to access the Internet left-clicks a link to go to its location. Exceptions might be left-handers who switch the buttons on a mouse, those using screen readers, or similar niche users of the Internet.

I tend to multi-task, so I will scan a page and open every link I might want to check in a new tab. The way I accomplish this is the browser’s context menu, via a right click on the link. In both Mozilla Firefox and Google Chrome, the open-in-new-tab (or window) options come first.

Since exactly what I wanted to check does not persist in my memory, opening each link in its own tab means I do not have to remember. I can just circle back through the tabs.

So any time a web designer changes the context menu so it is not there, my blood pressure rises.

A decade ago, web designers were terrified of people stealing photos and source code, so they would disable the context menu. Back then, I would turn off JavaScript, go to the page, download their images and source code, then email it all to them as proof that all they had done was annoy people.

Today, it seems my nemesis is a support portal where a right click on a link behaves exactly the same as a left click. At least Ctrl+Click still opens the item in a new tab, which is what I want. I did not name the company in hopes it takes them longer to break my workaround too.

P.S. It appears that they keep track of the last page visited, but updating a ticket does not make it the last one visited. So I end up somewhere else. 🙁

Interactive Archives

My jaw dropped at the end of this blog post Cloud Hosting and Academic Research.

There is a value in keeping significant old systems around, even if they no longer have active user bases.  A cloud hosting model seems so right to me–it’s scalable and robust. It just makes sense. But the hosting costs are a problem. Even if the total amount of money is small, grants are for specific work and have end dates. I can still be running a 10+ year old UNIX box, but I can’t still be paying hosting fees for a research project whose funding ended years ago, no matter how small that bill is.  Grants end–there’s no provision for “long term hosting.”  Our library can help us archive data, but they are not yet ready to “archive” an interactive system.  I hope companies that provide hosting services will consider donating long-term hosting for research.

Opening up a new area of digital archives by preserving the really cool works of the faculty seems like something I might enjoy.

My mentor in web design and server administration might have been described as a pack rat. He… Well, I guess, we kept around versions of web pages a decade old. Nothing ever really got deleted. The public just could not see it, thanks to permissions.

When building my portfolio, my mistake was not gathering up all the files needed to replicate the sites I designed. I’m no longer doing web design or even programming, so it is okay.

A professor in Geology had a pretty cool Virtual Museum for Fossils. The site moved around a few times, eventually ending up on the main web server that also hosted WWW. Of course, HTML, images, and Flash files are easy to archive: take the files and place them on a web server. Since they are static, they are easy to keep around for a long time. As long as the standards remain honored, they should be good. Developers of web browsers are under pressure to go for the new, which means the old potentially gets abandoned eventually.

Scripted web sites using Perl, PHP, ASP, JSP, JavaScript, or AJAX require a working interpreter. Even then, some things might not be backwards compatible.

About a year ago my mother ran across some 8mm film. An uncle found a place that converted it to DVD. Will we even be using DVDs in a decade? Maybe the 8mm needs to go on Blu-ray?

Going back to the scripted web sites: should an archived web site’s code be updated to work on the new version of the interpreter? Maybe. If makers of the interpreters allowed running in a backwards-compatible mode, then all would be good. Even better would be the ability to add a variable to a script telling the interpreter which older version to emulate. Administrators could then have programmers check non-working scripts by simply telling the interpreter to simulate an older version.

Collusion on Firefox

As we browse the Web, our browsers pick up cookies. Many sites will hand our browser advertisers’ cookies. More importantly, the advertisers’ servers can check whether we already have their cookies and where we obtained them. This is how they record our browsing habits: the more places they advertise, the better they are able to track us.
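The bookkeeping on the advertiser’s side can be sketched in a few lines. All the names, cookie IDs, and sites below are invented for illustration: the ad server sees the same cookie ID on every site that embeds its tag, and the Referer header tells it which site that was, so logging the (cookie, site) pairs is all it takes to build a profile.

```javascript
// Sketch of cross-site tracking via a shared third-party cookie.
const profiles = new Map(); // cookie ID -> set of sites the user was seen on

// Simulates the ad server handling one request for its tracking pixel.
// cookieId comes from the Cookie header; refererSite from the Referer header.
function handleAdRequest(cookieId, refererSite) {
  if (!profiles.has(cookieId)) profiles.set(cookieId, new Set());
  profiles.get(cookieId).add(refererSite);
  return profiles.get(cookieId).size; // distinct sites linked to this user
}

// The same browser (same cookie) visits two unrelated sites that both
// carry the network's ads:
handleAdRequest('uid-1234', 'google.com');
const sitesSeen = handleAdRequest('uid-1234', 'nytimes.com');
console.log(sitesSeen); // the two visits are now linked to one profile
```

The point of the sketch is that no coordination between the two sites is needed; embedding the same third party is enough.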

Collusion is an add-on for Firefox that detects the cookies added to my browser in order to identify the parties tracking me across sites. For example, I just installed it and visited Google, which dropped cookies for doubleclick, rubicon, bluekai, and others. I then went to the New York Times web site, which added Nielson, Doubleplex, and others such as doubleclick.

This is going to occupy me for days… Maybe even weeks.

Read Later Shotgun

Back in my Netscape 3 days, my bookmark.html was incompletely saved, losing about two-thirds of the file. Researching how to fix it revealed to me that the file was just HTML. My new editing skills could not recover it, but I could make a new copy and fix individual lost entries. Making a copy onto a floppy disk meant I could take my bookmarks with me. I noticed that saving them this way let me preserve a bunch of sites I never went back to see.

Somehow I decided to maintain my own home page that lived on the floppy disk. Pages I wanted to read later, I would add to the bottom of the home page. A few years later, I created a password-protected secret page on my work personal web site to replace the floppy. The strategy was the same: keep an HTML file. Stuff I did read, I removed from the file. Ugly, but it worked.

Then I started blogging. Reminders to myself to read something came from posting them to my blog. As I was constantly in my blog, I did go back and read things. But not removing read links meant confusion and sometimes multiple reads. Eventually I stopped reading links saved for later.

Bookmarking and clipping web sites arrived, exploiting code placed in the browser toolbar to record bookmarks. I tried several: Evernote, Delicious, Magnolia, Diigo, Instapaper. However, I found I rarely went back to look at what I saved. Saving entries was easy; seeing what I saved required going to the site, which I rarely did. Often by the time I did go back to read bookmarked items, they had slid behind a paywall or expired, so pure bookmarking sites were awful. Clipping had its own failure: multi-page articles made saving content a pain and reduced the likelihood I would save anything to read later.

Most of my online reading came from blogs, so I tried to use my RSS reader to handle it. First with Bloglines and later with Google Reader, I thought starring entries would perfectly handle what to read later. Keeping entries marked as unread certainly did not, as Google Reader has the annoyance of automatically marking as read anything older than 30 days. The feature to tag posts with something like “read later” helped. It works, but only for posts in GReader.

Chrome added an Apps feature. Surely Read Later Fast would be the solution. It is in my web browser, so like the home page it is around all the time. Like the clipping web sites, it preserved the whole page. With a single click I could dismiss an item as read. I just… forgot it was there. (I just re-installed the app and connected it to diigo.com to find over a dozen items from over a year ago.)

Guess what I really need is for something like Read Later Fast to have an icon in the address bar to remind stupid me there is stuff for me to read. (I use One Number to remind me I have Gmail and GReader posts to read.)

Textarea Backup

I am going through the software installed on my work computer in order to transfer to a new one. This one came to my attention as something potentially relevant to others.

A common problem we hear about running a web-based learning management system is the web browser crashing before the user could submit a form. The complaints we hear are usually because an assignment was lost, so the student received a 0 for a major grade. The ones who managed to redo the assignment in time generally never reach us. Nor do lost mail messages, discussions, or anything else not for a grade. The causes are many. Naturally the blame lies with us for running such a crappy product. Smart applications like the WordPress post/page editor automatically save these boxes. Unfortunately, 99.99% of applications are not smart.

An interesting Greasemonkey script, Textarea Backup, will preserve information written into a textarea form element. When the browser restarts and returns to the page, the information written into the textarea will still be there.
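The core idea can be sketched in a few functions. This is my own rough approximation of the technique, not Textarea Backup’s actual code; the storage shim, the URL, and the field name are all invented for illustration (a real userscript would use GM_setValue/GM_getValue or localStorage and hook the textarea’s input events).

```javascript
// Sketch: back up a textarea's value under a key derived from the page
// and field, and restore it when the page is revisited after a crash.

// Storage shim standing in for GM_setValue/GM_getValue or localStorage.
const store = {};
const setValue = (key, val) => { store[key] = val; };
const getValue = (key, fallback) => (key in store ? store[key] : fallback);

function backupKey(pageUrl, fieldName) {
  return 'ta-backup:' + pageUrl + '#' + fieldName;
}

// Called on every edit (e.g. from an input event listener).
function saveDraft(pageUrl, field) {
  setValue(backupKey(pageUrl, field.name), field.value);
}

// Called on page load. Only fills an empty field, so a fresh page load
// gets the draft back but an in-progress edit is never clobbered.
function restoreDraft(pageUrl, field) {
  if (field.value === '') {
    field.value = getValue(backupKey(pageUrl, field.name), '');
  }
  return field.value;
}

// Simulated crash: the draft is saved, the "browser" restarts with an
// empty field, and the draft comes back. (URL and field are made up.)
const field = { name: 'assignment', value: 'My essay draft...' };
saveDraft('https://lms.example.edu/submit', field);
field.value = '';
restoreDraft('https://lms.example.edu/submit', field);
console.log(field.value);
```

The only subtle design choice is the restore guard: without the empty-value check, a stale backup could overwrite newer typing.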

Google Chrome natively supports Greasemonkey scripts. Mozilla Firefox still requires the Greasemonkey add-on.

With Greasemonkey installed, one can just hit the install button on a script’s page at userscripts.org and click through the various confirmations that one really wants to download or install it. Pretty simple.

Do colleges or universities actually recommend add-ons like Textarea Backup to students? Or are students left to figure out stuff like this on their own?