## Outlook Data File Corruption

Outlook became unusable. I tried switching to webmail, but my workflow is such that I essentially stopped checking email for the last week. Meeting invites went unseen. Notifications missed my attention. Every strategy I tried to ensure that I saw email and calendar items was ineffective. So, I kind of need the application to work.

The issue appeared to be some kind of file corruption. The application would crash due to “a problem.” When I opened it again, it would claim there was an issue with the data file and ask to repair it. I allowed it. It would make a backup, repair the file, and tell me all was good. Things would be fine for a while until it happened again.

Back in November it happened just twice. Then in early December, it was a couple of times a week. In mid-December, a couple of times a day. Finally, today, it would only stay open for a couple of minutes.

I decided that since my data is on Exchange, deleting the files should not really be catastrophic. Outlook should just rebuild the data files for me. So, I renamed the data files. (“Should” != “definitely would.”) They are located in C:\Users\&lt;username&gt;\AppData\Local\Microsoft\Outlook and have the file extension .ost or .pst. I chose to prefix their names with “bad_”. I started Outlook again. It rebuilt the data files. I have not seen a crash since.
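If I ever have to do this again, it should script easily. A hedged Powershell sketch (the function name is mine; close Outlook first, and the default path assumes a standard profile):

```powershell
# Prefix every Outlook data file in a folder with "bad_" so Outlook
# rebuilds fresh copies on the next launch. Function name is hypothetical.
function Rename-OutlookDataFiles {
    param([string]$Path = "$env:LOCALAPPDATA\Microsoft\Outlook")
    Get-ChildItem -Path $Path -File |
        Where-Object { $_.Extension -in '.ost', '.pst' } |
        ForEach-Object { Rename-Item -Path $_.FullName -NewName ("bad_" + $_.Name) }
}
```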

My guess is the repair did not actually fix the problem. Certainly, the repair tool kept identifying things to fix.

## Ambiguous Direction

I spent far too long stuck because of the direction: “Locate the level that you want to configure.” To me, that implied going to the application or the site. Only the “ISAPI and CGI Restrictions” feature I wanted was not there. I did all kinds of things thinking that somehow the feature I needed was not installed because I did not see it.

It turned out the setting is only visible at the topmost (server) level. Much of the documentation fails to mention looking for it there.

A team lead was curious which of his people have access to a certain system. I knew the information was in an Active Directory group, and I knew where to find the group.

So, I pulled up Active Directory Users and Computers, found the group, and was dismayed because it would take three screenshots to capture the information, since the Properties box is not resizable. That bothered me quite a bit. See, I could live with one screenshot, but not three.

Enough so that I decided it would be worth writing a Powershell script to dump the data. I am sure other coworkers have one. But I did a quick Google and found something that looked way too easy. The last time I tried to do this, I had to script connecting to the AD server, searching through the forest, etc. This was one command. I added a couple of options to make the output more presentable.

```powershell
Get-ADGroupMember <group_name> | Select-Object -exp name | Sort-Object
```

## Complicated Calendar

I noticed “Election Day” was in the wrong place in my work calendar. It was on November 1st when it ought to have been November 8th. The date is the first Tuesday after the first Monday, giving it a potential range of November 2nd through 8th.

The Microsoft Outlook / Exchange pattern for recurring appointments has nothing to accommodate this. The closest pattern Outlook has is “on the first Tuesday of November,” which is only wrong in years like this one, when November 1st falls on a Tuesday. Looks like it will happen again in 2022 (non-presidential) and 2044. It last happened in 1988.
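The rule itself is trivial to express in code, which makes the missing recurrence pattern all the more irritating. A quick Powershell sketch (the function name is mine): the first Tuesday after the first Monday is equivalent to the first Tuesday on or after November 2nd.

```powershell
# U.S. Election Day: first Tuesday after the first Monday in November,
# i.e. the first Tuesday on or after November 2nd. Name is hypothetical.
function Get-ElectionDay {
    param([int]$Year)
    $day = [datetime]::new($Year, 11, 2)   # earliest possible date
    while ($day.DayOfWeek -ne [DayOfWeek]::Tuesday) { $day = $day.AddDays(1) }
    $day
}
```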

I think this came from the Microsoft list of United States holidays I added. Certainly, looking at that, I have “United States” checked for the locations shown in my calendar. All the entries appear to be entered as individual dates rather than as recurrence patterns for the regular ones with one-offs for the crazy ones.

The only other calendar entry I can think of that is so complicated is Easter. OK, actually it is far more complicated. Easter is tied to Passover, Passover depends on the full moon, and the Orthodox churches have a different calculus than the Catholic/Protestant ones. Everything else is on a fixed date or the first/second/third/fourth day-of-week of a month. Those are all easy to program with a rule.
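For contrast, computing the Western (Gregorian) Easter date takes a full arithmetic routine, the anonymous Meeus/Jones/Butcher computus. A sketch in Powershell (the function name is mine):

```powershell
# Gregorian Easter via the Meeus/Jones/Butcher computus.
# Function name is hypothetical.
function Get-EasterSunday {
    param([int]$Year)
    $a = $Year % 19
    $b = [math]::Floor($Year / 100); $c = $Year % 100
    $d = [math]::Floor($b / 4);      $e = $b % 4
    $f = [math]::Floor(($b + 8) / 25)
    $g = [math]::Floor(($b - $f + 1) / 3)
    $h = (19 * $a + $b - $d - $g + 15) % 30
    $i = [math]::Floor($c / 4);      $k = $c % 4
    $l = (32 + 2 * $e + 2 * $i - $h - $k) % 7
    $m = [math]::Floor(($a + 11 * $h + 22 * $l) / 451)
    $month = [math]::Floor(($h + $l - 7 * $m + 114) / 31)
    $day = (($h + $l - 7 * $m + 114) % 31) + 1
    [datetime]::new($Year, [int]$month, [int]$day)
}
```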

## Outlook 2016 Appointments

I got Office 2016 at work, so I am struggling through moved cheese. The big change for me is the To-Do bar's appointments. In 2010, it showed several days' worth of items, which is ideal for me. I do not have a ton of meetings, but I like to see a list of what is upcoming so I know to prepare. It is not in my workflow to switch over to the calendar and look ahead unless I am looking for something I think I have that is not represented in the To-Do appointments.

Office 2013 shortened the appointments list to the current day. WTF? Apparently Microsoft recognized the problem and added back a few days ahead. I prefer to see at least a week ahead.

So, I looked for some information about the problem, but there was no configuration change to address it. Instead, there was an add-in called “Outlook 2013 Add-In” on CodePlex which looked promising. I installed it and am very pleased. I set it to show 14 days. I might need to switch over to the calendar even less than I did prior.

On the way to discovering the add-in, I found recommendations to use the Outlook Today feature. Unfortunately, it displays the same content as the To-Do bar, so it is not very helpful.

## Integrate PeopleMap With Office

I work to integrate systems. So, when I learn about things, my mind drifts into how we would use them, and then into how we would tie them together with other things we have to make them better.

Last week news dropped about Microsoft (MSFT) buying LinkedIn (LNKD). The big deal people seem to be making of it is the Customer Relationship Management (CRM) potential for Microsoft. Imagine Outlook having a guide about whomever you are emailing. LinkedIn could potentially supply the data.

Then Friday I took a PeopleMap System communication training. (Leader-Task) The idea is that people have innate preferences for how they process information; understanding those preferences and tailoring your communication to key off them makes one more effective at working with them.

I guess the MSFT-LNKD deal was still on my brain, because it seemed like what we really needed was a PeopleMap plug-in for Outlook which would remind us of the type of the individuals we are emailing. My vision: since everyone was providing management with our types, that information could be populated into the directory service. Then a plug-in would use the email address of the recipient(s) to display each person’s type and perhaps advice on how to communicate with that type. No more racking one’s brain over someone’s type and how to deal with it.

Of course, I used Google to see if this already existed. It pointed me to PeopleMaps, which is a service for exploring one’s social network to find connections to sales targets and get introductions, avoiding cold calls. Microsoft’s Social Connector would pull photos from Facebook for contacts.

## Search Within Files

I keep some logs in a directory just in case I need to reference them later, the kind of data that has saved my bacon a handful of times. Of course, there are over 3,000 files (16GB) in the directory. Of those, fewer than a hundred were potentially relevant, and in the end only a couple dozen had the data I sought.

Windows Explorer used to make this easy. I could put in the pattern I wanted and tell it to search the text within the files. It would give me the file name for each one containing the search string. For whatever reason, Windows 7 had to make it more difficult.

So, I wrote the easiest of Powershell scripts:

```powershell
$filelist = "D:\path\to\*files*.log"
Get-Content $filelist | Select-String -Pattern "Search String"
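A variant worth noting: Select-String can take the wildcard path directly and report which file each match came from, which is closer to the old Explorer behavior. A sketch (the helper function name is mine; the path is a placeholder):

```powershell
# Report the distinct files whose contents match a pattern.
# Helper name is hypothetical; Select-String does the real work.
function Find-FilesContaining {
    param([string]$Path, [string]$Pattern)
    Select-String -Path $Path -Pattern $Pattern |
        Select-Object -ExpandProperty Path -Unique

# e.g. Find-FilesContaining -Path "D:\path\to\*files*.log" -Pattern "Search String"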

Good thing, too, because apparently I would otherwise need to go through my Indexing Options and identify every file extension whose contents I want indexed. What a royal pain. My guess is doing so would also blow up the EDB data file from its current 2GB to something way larger. 10GB? 50? 100? Yuck.

## Leading Zeros In A Batch Timestamp

As “plumbers” for 17 databases and over a hundred application servers, we really do not have the time to sit there and watch them all. We design things so problems are pushed up to our attention. We are still getting to the point of predictive alerts to failures like we had on RHEL/Weblogic/Oracle, so usually we only get an alert after the failure.

The lack of a log for a component means that when it fails, we know nothing about why. In this case the vendor set up the component and wants to know why it failed, even though they did not set it up to collect any information. Playing with the component manually, I noticed it sends information to the screen, so I wrote my own wrapper script to capture it.

First stab at it used the vendor’s timestamp method:

```batch
SET mydate=%date:~10%_%date:~4,2%_%date:~7,2%
SET mytime=%time:~0,2%_%time:~3,2%
```

These values were plugged into other variables for the log names so I get a log for each run (one for standard out and one for standard error).

The afternoon I worked on this script, there appeared to be no problems. The script was scheduled for the morning, which is when the vendor had it run. Review of those logs showed that instead of “name_2014_01_23_04_51.log,” I got “name_2014_01_23_.” It was clear it broke on the hour. So I ran:

```batch
echo %time%
```

This returned a time with a leading space instead of a leading zero. That seemed strange, so I started Googling. Turns out this is a very common problem. The solution I chose came from “Need leading zero for batch script using %time% variable.”

```batch
:: prepare time stamp
set year=%date:~10,4%
set month=%date:~4,2%
set day=%date:~7,2%
set hour=%time:~0,2%
:: replace leading space with 0 for hours < 10
if "%hour:~0,1%" == " " set hour=0%hour:~1,1%
set minute=%time:~3,2%
set second=%time:~6,2%
set timeStamp=%year%_%month%_%day%_%hour%_%minute%_%second%
```

Uglier. Though, it did remind me to use the call command to put this logic into another script file so I can easily re-use it in the next script I will inevitably need to write.

And oh, how much I prefer Powershell over batch scripts.
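Case in point: the entire timestamp dance above collapses to one line of Powershell, since Get-Date format strings zero-pad for free:

```powershell
# MM, dd, HH, mm, ss are all zero-padded; HH is the 24-hour clock.
$timeStamp = Get-Date -Format "yyyy_MM_dd_HH_mm_ss"
```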

P.S. I left out that my first solution was to put the log file names in double quotes. This is a common way to deal with spaces in Windows file names. That was in no way satisfactory to me; it just confirmed the problem. So I went Googling.

## Counting Folders

Back in June, after a whole seven months in our system, one of our clients hit Microsoft’s limit on the number of folders within a folder. They are on track to do it again a year later.

I lost the first script I wrote to figure out which directories were affected, so I wrote a new one yesterday. The problem directories are all in the same path, named schoolcode\courses. A configuration setting for the school's folder name has the application create a new folder for each new course at this location. When the folder fills up, the solution is to add a number to that name so a new folder is created and subsequent courses go there. So I looked for all directories fitting this pattern, then looped through the list, counted the objects in each directory, and wrote the count to the output.

```powershell
$logdate = Get-Date -uformat "%Y%m%d%H%M"
$sites = 'nas1', 'nas2'

# Loop through each NAS
foreach ($site in $sites) {
    # Compile the list of directories for the NAS
    $dirlist = Get-ChildItem \\$site\share\path\to\client_stuff\*\courses*
    # Loop through found directories
    foreach ($diritem in $dirlist) {
        # Count
        $getcount = (Get-ChildItem $diritem).Count
        # Post to output
        Add-Content \\nas1\share\to\scripts\LOGS\count_directories_$logdate.log "$diritem  ==  $getcount"
    }
}
```

Obviously, I do not really want to run this script every term or even every six months. So I really need to design a check for our monitor service. It might report any directory that crosses 50,000 objects. That would give us a 15,533 cushion to get the configuration changed to start using a new folder. The correct threshold is hard to choose. As is, it is slightly bigger than the largest term for our largest client, which would ensure no surprises. With something like 5,000 it would be possible to get no warning one day and then hit the limit at the start of a new term: that client sent 14,795 courses for the Fall 2013 term. But really only that one school has this problem. The next two largest clients had 8,246 and 6,364 courses for Fall 2013. (Combined, almost the same as the biggest.) Really, though, the problem with a check is that Powershell takes 2 hours to count this amount of stuff. So probably I need to record this data to a database and have the monitor check the database instead.

## Windows Module Installer

A pain in my side over the past year finally forced me into addressing it. Windows Module Installer runs as TrustedInstaller.exe and for the most part just does its job, which is to keep in touch with the Windows Update service and apply the updates sent to it. Occasionally it develops a memory leak and consumes RAM until someone intervenes. We have about 140 servers; over the past two months about 20 showed this behavior. I usually only have to intervene when it consumes about 2GB of the 10GB we allocated to these servers. That has been about 3 times over the past 2 months and ten over the past year.

Using Yaketystats, I saw the trend was far worse than I had noticed, so I decided we needed to do one of two things.

1. Shut it down. Start them when we need them. Shut them down again when we do not.
   Benefit is we do not have to worry about them getting out of control consuming resources. Unfortunately, those wanting to push out updates will have to add a step to start the services before pushing.

2. Recycle. Routinely shut them down and start them back up. Relatively easy to automate, so set it and forget it.

Recycle it was. Well, it gets much worse. First, running the commands works inconsistently. For example, I ran Set-Service TrustedInstaller -startuptype "Automatic" against every host in a development system. As is my habit, I ran a check to make sure it worked. It did on two of the five. So I ran it again. The other three were fixed. I did the same process on another development system with five hosts. Three of the five worked the first time and the other two the second. The pattern held true for another three systems, all with five servers each. Setting the startup type to Manual worked the same inconsistent way. My check:

```powershell
Get-WmiObject -ComputerName $computer win32_service -Filter "name = 'trustedinstaller'"
```

Second, stopping and starting them does not appear to stick. Several minutes after I have stopped all of the services, they appear to be back in their prior state: those that were not running stay stopped, and those that were running are running again. And if I start all of them, then at some point the ones that were stopped stop again.

Guess I have a lot of research ahead of me. 🙁