Martin Luther King, Jr. : Quotes to Make You Think

Various Martin Luther King, Jr. quotes my Facebook friends posted today. Strangely enough I did not already have any on the Quotes to Make You Think page.

The time is always right to do what is right.

Everybody can be great because everybody can serve.

Faith is taking the first step even when you don’t see the whole staircase.

In the End, we will remember not the words of our enemies, but the silence of our friends.

I refuse to accept the view that mankind is so tragically bound to the starless midnight of racism and war that the bright daybreak of peace and brotherhood can never become a reality… I believe that unarmed truth and unconditional love will have the final word.

A nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual doom. (from Where Do We Go from Here: Chaos or Community?, 1967.)

I have decided to stick with love. Hate is too great a burden to bear.

The ultimate measure of a man is not where he stands in moments of comfort and convenience, but where he stands at times of challenge and controversy.

Some others:

Man was born into barbarism when killing his fellow man was a normal condition of existence. He became endowed with a conscience. And he has now reached the day when violence toward another human being must become as abhorrent as eating another’s flesh. (from Why We Can’t Wait, 1963.)

All men are caught in an inescapable network of mutuality.

Like an unchecked cancer, hate corrodes the personality and eats away its vital unity. Hate destroys a man’s sense of values and his objectivity. It causes him to describe the beautiful as ugly and the ugly as beautiful, and to confuse the true with the false and the false with the true.

Of course, The Vision of Race Unity: America’s Most Challenging Issue seems very applicable here. Dr. King’s views and those of the Baha’i Faith seem very much in sync.

Useful User Agents

Rather than depend on end users to accurately report the browser used, I look for the user-agent in the web server logs. (Yes, I know it can be spoofed. Power users would be trying different things to resolve their own issues, not coming to us.)

Followers of this blog may recall I changed the Weblogic config.xml to record user agents to the webserver.log.

One trick I use is making the double quote the awk field separator to isolate just the user agent. The list is then sorted by name so uniq -c can count how many of each is present. Finally, I sort again by count with the largest at the top to see which are the most common.

grep <term> webserver.log | awk -F'"' '{print $2}' | sort | uniq -c | sort -n -r

This is what I will use when looking for a specific user. If I am looking at a wider range, such as the user agents for hits on a page, then I will probably pipe it through the head command to look at the top 20.

A “feature” of this is getting the build (Firefox 3.011) rather than just the version (Firefox 3). To get just the version, I tend to use something more like this to count occurrences of a particular version in the log.

grep <term> webserver.log | awk -F'"' '{print $2}' | grep -c '<version>'

I have yet to see many CE/Vista URIs with the names of web browsers. So these are the most common versions one would likely find (what to grep – name – notes):

  1. MSIE # – Microsoft Internet Explorer – I’ve seen 5 through 8 in the last few months.
  2. Firefox # – Mozilla Firefox – I’ve seen 2 through 3.5. There is enough difference between 3 and 3.5 (also 2 and 2.5) I would count them separately.
  3. Safari – Apple/WebKit – In searching for this one, I would add a ‘grep -v Chrome’ to the search to eliminate Google Chrome user agents, whose strings also contain Safari.
  4. Chrome # – Google Chrome – Only versions 1 and 2.

Naturally there are many, many others. It surprised me to see iPhone and Android on the list.
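Putting the pieces above together, a sketch like the following counts each family in one pass. The sample log lines are hypothetical stand-ins for real webserver.log entries; the only assumption is that the user agent is the second double-quote-delimited field, as in my logs.

```shell
# Hypothetical sample entries standing in for real webserver.log lines.
cat > webserver.log <<'EOF'
2009-08-01 00:00:01 10.1.1.1 "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1)" 200
2009-08-01 00:00:02 10.1.1.2 "Mozilla/5.0 (Windows; U) Gecko/2009 Firefox/3.0.11" 200
2009-08-01 00:00:03 10.1.1.3 "Mozilla/5.0 (Windows; U) AppleWebKit/530 Chrome/2.0 Safari/530" 200
EOF

# Count the major families; grep -c prints 0 when nothing matches.
for browser in MSIE Firefox Chrome; do
  printf '%s: %s\n' "$browser" \
    "$(awk -F'"' '{print $2}' webserver.log | grep -c "$browser")"
done

# Safari needs the -v Chrome filter because Chrome's agent string also says Safari.
printf 'Safari: %s\n' \
  "$(awk -F'"' '{print $2}' webserver.log | grep Safari | grep -cv Chrome)"
```

For the three sample lines this reports one hit each for MSIE, Firefox, and Chrome, and zero for Safari, since the lone Safari mention is filtered out as a Chrome agent.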

Weblogic Diagnostics

I noticed one of the nodes in a development cluster was down, so I started it again. The second start failed, so I ended up looking at logs to figure out why. The error in the WebCTServer.000000000.log said:

weblogic.diagnostics.lifecycle.DiagnosticComponentLifecycleException: weblogic.store.PersistentStoreException: java.io.IOException: [Store:280036]Missing the file store file "WLS_DIAGNOSTICS000001.DAT" in the directory "$VISTAHOME/./servers/$NODENAME/data/store/diagnostics"

So I looked to see if the file was there. It wasn’t.

I tried touching a file at the right location and starting it. Another failed start with a new error:

There was an error while reading from the log file.

So I tried copying WLS_DIAGNOSTICS000002.DAT to WLS_DIAGNOSTICS000001.DAT and starting again. This got me a successful startup. Examination of the WLS files revealed the 0 and 1 files have updated time stamps while the 2 file hasn’t changed since the first occurrence of the error.

That suggests to me Weblogic is unaware of the 2 file and only aware of the 0 and 1 files. Weird.

At least I tricked the software into running again.

Some interesting discussion about these files:

  1. Apparently I could have just renamed the files. CONFIRMED
  2. The files capture JDBC diagnostic data. Maybe I need to look at the JDBC pool settings. DONE (See comment below)
  3. Apparently these files grow and add a new file when it reaches 2GB. Sounds to me like we should purge these files like we do logs. CONFIRMED
  4. There was a bug in a similar version causing these to be on by default.
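Along the lines of item 3, a purge sketch might look like the following. The mock directory stands in for the real $VISTAHOME/servers/$NODENAME/data/store/diagnostics path from the error message, and the 30-day cutoff is an arbitrary assumption; run it only with the node shut down.

```shell
# Mock store directory standing in for
# $VISTAHOME/servers/$NODENAME/data/store/diagnostics on a real install.
STORE_DIR=$(mktemp -d)
touch "$STORE_DIR/WLS_DIAGNOSTICS000000.DAT" "$STORE_DIR/WLS_DIAGNOSTICS000001.DAT"
# Age one file artificially so the purge has something to catch.
touch -t 200901010000 "$STORE_DIR/WLS_DIAGNOSTICS000001.DAT"

# With the node shut down, remove .DAT files untouched for 30+ days.
find "$STORE_DIR" -name 'WLS_DIAGNOSTICS*.DAT' -mtime +30 -print -delete
```

The same find line, pointed at the real store directory, could run from cron the way we already rotate logs.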

Guess that gives me some work for tomorrow.
🙁

Email Harvesters

Good sign: I missed the story about the brothers convicted of harvesting emails the first time. Well, I noticed a followup.

Back around 2001, the CIO received complaints about performance for the web server. So, I went log trolling to see what the web server was doing. A single IP dominated the HTTP requests. This one IP passed various last names into the email directory. Some quick research revealed Apache could block requests from that IP. That calmed things down enough for me to identify the owner of the IP. The CIO then bullied the ISP to provide contact information for the company involved.

Previous little adventures like this landed me a permanent job, so I jumped at similar challenges.

Well, a few years later, it happened again. This time my boss had me develop a script for disseminating the anti-virus software package to home users. Basically, it used email authentication to verify whether someone could get the download link. So I applied the same technique to the email directory. Well, this upset some people who legitimately needed email addresses, so the human workers would provide email addresses to people with a legitimate need.

I’m glad that, since I’ve left, VSU no longer looks up email addresses for people. (I thought some of the requests questionable.) Also, my little email authentication script predates LDAP being available to the university. I think the new solution is much better.

One of the more vocal complainers about my having stopped non-VSU access to the email directory was my current employer. We apparently list email addresses for employees freely, which makes me wonder how much of the spam we get is due to the brothers described at the beginning of this story. Or other email harvesters? Just hitting the send button potentially exposes the email address.

No worries. I’m sure Glenn is protecting me. 🙂

Preserving CE/Vista Settings

I’ve been asked for notes about this a few times. So here’s a blog post instead.
🙂

A coworker is working on scripting our updates. We lost the Luminis Message Adapter settings in applying the patch to the environment we provide to our clients. Fortunately, those settings are maintained by us, not our clients, so I pushed those settings back very easily. Still, it points to the need to capture the settings so they can be restored later.

In Oracle databases, this is pretty easy. As the schema user, run the following. It does a few intentional things. First, we have multiple institutions, so the breaks make identifying each institution easier. Second, the same label for multiple forms gets confusing, so I sort by settings description id on the theory that these ids are generated when the page is created, so settings for the same tool will float together. (The last modified time stamp is probably unnecessary; I used it in an earlier version and left it in case Vista, for whatever reason, added a new setting for the same label instead of modifying the existing one.) Spool this both before and after the upgrade, then use diff or WinMerge to compare the versions. Anything present in the before version but missing afterward should be evaluated for adding back to the settings.

col lc_name format a50
col setting_value format a80
col label format a80
col lock format 999
col child format 999

clear breaks computes
break on lc_name skip 1

select learning_context.name lc_name, settings_description.label, settings.setting_value,
settings.locked_flag "lock", settings_description.inheritable_flag "child"
from learning_context, settings, settings_description
where settings.settings_desc_id = settings_description.id
and settings.learning_context_id = learning_context.id
and learning_context.type_code in ('Server','Domain','Institution','Campus','Group')
order by learning_context.name, settings.settings_desc_id
/
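The before/after comparison might be sketched like this. The two .lst files here are hypothetical stand-ins for the real spool output of the query above; comm -23 lists the lines unique to the first file, which are the candidate lost settings.

```shell
# Hypothetical spool files from running the query before and after an upgrade.
printf 'INSTITUTION A\tsome.label\ton\n'    >  before.lst
printf 'INSTITUTION A\tother.label\ttrue\n' >> before.lst
printf 'INSTITUTION A\tsome.label\ton\n'    >  after.lst

# comm needs sorted input; -23 keeps only lines unique to the before spool.
sort before.lst > before.sorted
sort after.lst  > after.sorted
comm -23 before.sorted after.sorted
```

For these sample files, only the other.label line comes out, exactly the setting that disappeared in the upgrade.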

An example of the multiple forms issue is external authentication. CE/Vista provides an LDAP (A) and an LDAP (B). The settings_description.label for both is contextmgt.settings.ldap.source. The settings_description.name for both is source. It looks like each of the two identical labels has a different settings.settings_desc_id value depending on whether it is A or B. To me it seems lame to use the same label for two different ids.

The most vulnerable parts of the application to lose settings during an update are the System Integration settings. A mismatched Jar on a node will wipe all the settings associated with that Jar.

However, I can see using this to capture the settings as a backup just in case an administrator or instructor wipes out settings by mistake. Yes, this is scope creep. Create a backup of the settings table to actually preserve the settings.

create table settings_backup_pre_sp2hf1 tablespace WEBCT_DATA as select * from settings;

Contexts: As a server admin, I maintain certain settings and push those down. Each client has control over some other settings and may push those down from the institution context. Maybe some are creating division and group admins? Maybe some instructors are changing things at the course or section levels. I may end up capturing everything?

Restoration: The whole purpose of preserving the settings is to restore them later. There are a couple methods in theory:

  1. Providing the settings to a human to re-enter. The labelling issue makes me question the sanity of trying to explain this to someone.
  2. Updating the database directly would just need settings.id to ensure it is the right location. Maybe dump out the settings in the format of an update command with a label on each to explain the context? Ugh.

If settings were not so easily lost, then this would be so much easier.

View: Another table of interest is the settings_v view. (Redundant?) The only reason I don’t like this view is that it reports the values for every learning context, which makes reporting off it take much, much longer. For example, the encryption key for a powerlink is listed in 8 places in settings/settings_description and 18,769 places in settings_v.

Better CE/Vista Web Server Log

Some support tickets are more easily solved by knowing both user behavior and environment. An often helpful piece of information is what web browser they used. To add this, shut down the cluster, edit /VISTA_HOME/config/config.xml to include cs(User-Agent), and start the cluster. The line needs to change for every node; at startup, the nodes will download a new copy of the file.

<elf-fields>date time time-taken c-ip x-weblogic.servlet.logging.ELFWebCTSession sc-status cs-method cs-uri-stem cs-uri-query bytes cs(User-Agent) x-weblogic.servlet.logging.ELFWebCTExtras</elf-fields>

Command:
cp config.xml config.xml.bak
sed -e 's/bytes x-/bytes cs(User-Agent) x-/g' config.xml.bak > config.xml

Probably this could be edited in the Weblogic 9.2 console. I haven’t looked yet.

Mail From Address

It appears CE/Vista has several locations for defining the email addresses it uses for SMTP.

  1. $WEBCTDOMAIN/config/config.xml:
    mail.from=
    From address for messages sent.
  2. $WEBCTDOMAIN/customconfig/startup.properties:
    WEBCT_ADMIN_EMAIL=
    Some internal errors have a mailto: prompt to contact the server administrator.
  3. $WEBCTDOMAIN/serverconfs/log4j.properties:
    log4j.appender.EMail.To=
    Report fatal errors.
  4. $WEBCTDOMAIN/serverconfs/log4jstartup.properties:
    log4j.appender.EMail.To=
    Report fatal errors.
  5. $WEBCTDOMAIN/webctInstalledServer.properties:
    WEBCT_ADMIN_EMAIL=
    Installer picks up this value for populating #2 and possibly #3 and #4.
  6. $WEBCTDOMAIN/webctInstalledServer.properties:
    MAIL_ORIGIN=
    Installer picks up this value for populating #1.

What really disturbs me is that the Vista 8 installer created log4j properties files with the SMTP server set to miles.webct.com, sending from vista.monitor@webct.com. I cannot find anything in the Vista 8 documentation, wiki, or Google index about the “Vista Trap Notification” subject line, from address, or SMTP address the log4j appender appears designed to use.

This Vista Trap Notification appears designed to send an email to the address any time a fatal error is encountered. That’s fine. Just use the SMTP host and From address requested in the installer.

Don’t get me started about giving end users a mailto: prompt to report errors.

Most Wired Teacher

“Who is the most wired teacher at your college?” (A Wired Way to Rate Professors—and to Connect Teachers)

Although the university runs workshops on how to use Blackboard, many professors are reluctant, or too busy, to sit through training sessions. Most would prefer to ask a colleague down the hall for help, said Mr. Fritz.

Professional support seems too intimidating, cold, and careless. Support fixes the problems of others who created those problems for themselves:

  • choices made in software to use
  • configuration choices
  • mistakes in processing logic

The concept of identifying the professors who most use the system is a good one. We already track the amount of activity per college or university in the University System of Georgia. The amount of data (think hundreds of millions of rows across several tables) would make singling out the professors a very long-running query. That doesn’t mean it is a bad idea, just not something we would do with Vista 3. We probably could with Vista 8, which uses a clean database.

I’d like to see two numbers:

  1. Number of actions by the professor
  2. Number of actions by all the classes the professor teaches

Ah, well, there are lots of other reports which need to be done. Many more important than this one. 

Some questions from the article: “Will colleges begin to use technology to help them measure teaching? And should they?” At present, creating such reports requires IT staff with database reporting or web server skills. Alternatively, additional applications like the Blackboard Outcomes System can provide the data. The real problem is the reliability and validity of the data. Can it really be trusted to make important decisions, like which programs or employees are effective?

Rock Eagle Debrief

GeorgiaVIEW

  1. SMART (Section Migration Archive and Restore Tool) created for us by the Georgia Digital Innovation Group seemed well received. I’m glad. DIG worked tirelessly on it on an absurdly short schedule.
  2. Information is strewn about in too many places. There isn’t one place to go for information; instead, between Blackboard, VistaSWAT, and GeorgiaVIEW, there are about 29. I am amazed I find information at all.
  3. Blackboard NG 9 is too tempting for some.
  4. Vista does DTD validation, but not very well. We need to do XML validation before our XML files are run. As we do not control the source of these files, and errors by those creating the files cause problems, we run them in test before running them in production. I am thinking of something along the lines of validating the file, finding the errors, and reporting the problems in the file back to the submitter. Also, it should do XML schema validation so we can ensure the data is as correct as possible before we load it.

Yaketystats
  1. If you run *nix servers, then you need Yaketystats. I have been using it for 2 years, and it has revolutionized how I go about solving problems. If you are familiar with my Monitoring post, then this is #2 in that post.
That is all for now. I am sure I will post more later.

Forcing Weblogic’s Config.xml

Let’s never mind why I am working on this in the first place. Namely…

  1. the Blackboard Learning Environment Connector in Vista 8 introduced using the hostname and port for applet URLs,
  2. Blackboard dropped WebCT’s support for using a different port for an application when behind a load balancer.
So we found out we could use port 443 as the SSL listen port: because we terminate SSL on the load balancer, Weblogic never actually binds to port 443, but the Vista application is tricked into displaying to the end user what we wish.
In the past week, we have put the correct config.xml in place multiple times and found it reverts to an older version with the port we don’t want. The first time, I was lazy and did not shut down the Weblogic admin server because… well… that was the lazy practice I had used in Weblogic 8.1 without a problem. My shell record shows the file was correct then. Within hours it wasn’t correct anymore.
So, we found a few things…
  1. a copy of the config.xml is stored in WEBCTDOMAIN/servers/domain_bak/config_prev/,
  2. all files in WEBCTDOMAIN/config/ are pushed to the nodes,
  3. to change this value in the Weblogic console requires turning on a feature to bind to the SSL listen port.
Additionally, we think research into this would show Weblogic keeps this information in memory. It then writes its changes back to the file on the admin node (destroying our change), and the managed nodes pick up the change.
The latest shot at this is to purge #1 and #2 on both the admin server and managed nodes, put the right file in place on the admin server, and see if it reverts again.
So now I’ve got to write a script to periodically check whether the nodes have the wrong listen port and email us should it change.
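A first sketch of that check follows. The &lt;listen-port&gt; element name is my assumption about the Weblogic 9.2 config.xml layout, the mock file stands in for the real $WEBCTDOMAIN/config/config.xml, and the mail command in the comment is one option among many.

```shell
# Succeeds when the wanted SSL listen port is present in the given config.xml.
check_listen_port() {
  grep -q '<listen-port>443</listen-port>' "$1"
}

# Demo against a mock config with the port we do NOT want; a real run would
# point at $WEBCTDOMAIN/config/config.xml on each node.
printf '<ssl><enabled>true</enabled><listen-port>7002</listen-port></ssl>\n' > mock-config.xml
if check_listen_port mock-config.xml; then
  echo "listen port OK"
else
  echo "listen port reverted"
  # From cron, replace the echo with something like:
  #   mail -s "config.xml reverted on $(hostname)" admins@example.edu < /dev/null
fi
```

Run every few minutes from cron, this would at least tell us the moment the admin server rewrites the file again.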