Thursday, May 22, 2008

Testing Remote Data Services with FlexUnit

This is a short example that describes an approach to testing Flex remote data access code with FlexUnit. Access to remote services in Flex is provided via the HTTPService, WebService, or RemoteObject classes. Requests to the data services are handled asynchronously, which adds a bit more complexity to unit testing. In this example, I will share a design idea on how to test the client side of the remote data services. This is something I came up with while working on my "pet" project.
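
To see why the asynchrony matters for testing, here is a minimal sketch of a RemoteObject call. The "user" destination and the login operation are the ones used throughout this post; the handlers and the credential values are just placeholders. The call returns immediately, and the result or fault shows up later in an event handler, so there is nothing to assert on right after the call:

import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.rpc.remoting.RemoteObject;

var userService:RemoteObject = new RemoteObject();
// The BlazeDS destination; its configuration is shown later in this post.
userService.destination = "user";
userService.addEventListener(ResultEvent.RESULT, function(event:ResultEvent):void {
    // The response arrives here, some time after login() has already returned.
    trace("result: " + event.result);
});
userService.addEventListener(FaultEvent.FAULT, function(event:FaultEvent):void {
    // A system error (network, configuration, ...) ends up here instead.
    trace("fault: " + event.fault.faultString);
});
// Returns right away; the user object is not available at this point yet.
userService.login("username", "password");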


The "pet" project is a Web application that is based on a three tier architecture: an SQL database, a middleware part implemented with Java, Hibernate, and BlazeDS, and a client tier implemented with Flex. The example shown here works on a client side and tests access to the server-side part of the application. The tests covered in this example are designed to test a user login operation.


The login operation is defined in the server-side class called UserService.


package org.blazedspractice.model;

import java.util.List;

import org.hibernate.Hibernate;
import org.hibernate.Session;
import org.hibernate.Transaction;

public class UserService {

    /**
     * User login operation.
     *
     * @param username the user name.
     * @param password the password.
     * @return a User object, or null if the username is not found or the password is not valid.
     */
    public User login(String username, String password) {
        User user = null;
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        List users = session.createQuery(
                "from User user where user.username = ?")
                .setString(0, username).list();
        if (users.size() == 1) {
            user = (User) users.get(0);
        }

        if (user != null && user.isPasswordValid(password)) {
            // Touch the lazy-loaded collections so they are available
            // after the session is closed.
            Hibernate.initialize(user);
            user.getProjects().size();
            for (Project p : user.getProjects()) {
                p.getTasks().size();
            }
        } else {
            // Invalid password: return null.
            user = null;
        }

        tx.commit();
        session.close();

        return user;
    }
}

The UserService class is configured as a BlazeDS destination in remoting-config.xml:


<?xml version="1.0" encoding="UTF-8"?>
<service id="remoting-service"
class="flex.messaging.services.RemotingService">
<adapters>

<adapter-definition id="java-object"
class="flex.messaging.services.remoting.adapters.JavaAdapter"
default="true"/>
</adapters>
<default-channels>
<channel ref="my-amf"/>
</default-channels>
<destination id="user">
<properties>
<source>org.blazedspractice.model.UserService</source>
<scope>request</scope>
</properties>
</destination>
</service>



The FlexUnit test code shown below invokes the UserService login method through a RemoteObject and validates the response. We will review just a couple of test cases: a positive one, i.e. a valid user name/password combination, and a negative one that tests the login operation with an invalid password. The database is initialized with a test user account before the test run. Here is the code:


package org.blazedspractice.model
{
    import flash.events.Event;
    import flash.events.TimerEvent;
    import flash.utils.Timer;

    import flexunit.framework.TestCase;
    import flexunit.framework.TestSuite;

    import mx.rpc.events.FaultEvent;
    import mx.rpc.events.ResultEvent;
    import mx.rpc.remoting.RemoteObject;

    public class UserTest extends TestCase {
        private static const LOGIN_NAME:String = "user1";
        private static const LOGIN_PSW:String = "password";
        private static const INVALID_LOGIN_PSW:String = "invalid_password";

        private var user:User;
        private var fault:Boolean = false;
        private var resultCheckTimer:Timer;

        // The timer and result check timeouts
        private static const TIMEOUT_MS:int = 3000;
        private static const RESULT_CHECK_TIMEOUT_MS:int = 3500;

        /**
         * The test case constructor.
         * @param methodName The name of the test method to be called in the test run.
         */
        public function UserTest(methodName:String) {
            super(methodName);
        }

        /**
         * Builds a test suite.
         */
        public static function suite():TestSuite {
            var ts:TestSuite = new TestSuite();
            ts.addTest(new UserTest("testLogin"));
            ts.addTest(new UserTest("testLoginNegative"));
            return ts;
        }

        /**
         * Set up the test. This method is executed by the framework before every testXXX().
         */
        override public function setUp():void {
            user = null;
            fault = false;
        }

        /**
         * This test case validates the login operation. The login name and password are valid.
         */
        public function testLogin():void {
            // Create a remote object.
            var userService:RemoteObject = new RemoteObject();
            // This is the name of the destination configured in the BlazeDS settings.
            userService.destination = "user";
            // Add result and fault event listeners as asynchronous checkpoints.
            userService.login.addEventListener("result", handleLoginResponse);
            userService.addEventListener("fault", faultHandler);
            // Create a timer that will validate the result of the login operation.
            resultCheckTimer = new Timer(1);
            resultCheckTimer.delay = TIMEOUT_MS;
            resultCheckTimer.addEventListener(TimerEvent.TIMER,
                addAsync(loginCheck, RESULT_CHECK_TIMEOUT_MS));
            resultCheckTimer.start();
            // Call the login method.
            userService.login(LOGIN_NAME, LOGIN_PSW);
        }

        /**
         * This method handles the login response event. It is invoked by the
         * Flex framework once the server-side data service returns a response to
         * the login request.
         */
        private function handleLoginResponse(event:ResultEvent):void {
            user = event.result as User;
            trace("user: " + user);
        }

        private function faultHandler(event:FaultEvent):void {
            fault = true;
            fail(event.fault.faultString);
        }

        /**
         * Validate the positive test case.
         */
        private function loginCheck(event:Event):void {
            resultCheckTimer.reset();
            trace("loginCheck: " + user);
            if (fault) {
                fail("login failed");
            }
            assertNotNull(user);
            assertNotUndefined(user);
        }

        /**
         * The negative test case. The login password is invalid.
         */
        public function testLoginNegative():void {
            var userService:RemoteObject = new RemoteObject();
            userService.destination = "user";
            userService.login.addEventListener("result", handleLoginResponse);
            userService.addEventListener("fault", faultHandler);

            resultCheckTimer = new Timer(1);
            resultCheckTimer.delay = TIMEOUT_MS;
            resultCheckTimer.addEventListener(TimerEvent.TIMER,
                addAsync(loginFailureCheck, RESULT_CHECK_TIMEOUT_MS));
            resultCheckTimer.start();

            userService.login(LOGIN_NAME, INVALID_LOGIN_PSW);
        }

        /**
         * Validate the negative test case.
         */
        private function loginFailureCheck(event:Event):void {
            resultCheckTimer.reset();
            trace("loginFailureCheck: " + user);
            if (fault) {
                fail("the login call failed with a fault");
            }
            assertNull(user);
        }
    }
}

The comments in the code explain the low-level details of the test. The FlexUnit and Flex SDK API documentation will cover the rest. I will explain why we need a timer here and what addAsync is for. :) That is the key to the solution.


The call to userService.login(LOGIN_NAME, LOGIN_PSW); will submit a request to the remote data service and return immediately, since it is an asynchronous call. The response from the userService.login() call will be handled as an event. The call to the service could be successful, or it could fail due to a system error, for example if the network connection is not available or the service is not configured properly. These cases are handled as "result" and "fault" event types:


userService.login.addEventListener("result", handleLoginResponse);
userService.addEventListener("fault", faultHandler);

The faultHandler() method will be invoked by the Flex SDK framework in case of a system error while calling the data service. The handleLoginResponse() method will be invoked when the response is received successfully.


The test should validate the response and assert expected values based on the test scenario. It should also fail in case of a system error.


FlexUnit invokes only the methods whose names start with the "test" prefix. Usually a test is considered complete as soon as the testXXX() method returns. However, we need to validate the results of the login operation, which may only become available a few milliseconds after testXXX() is done. To handle that case, FlexUnit provides a method called addAsync. It is defined in the TestCase.as class of the FlexUnit framework:


/**
 * Add an asynchronous check point to the test.
 * This method will return an event handler function.
 *
 * @param func the Function to execute when things have been handled
 * @param timeout if the function isn't called within this time the test is considered a failure
 * @param passThroughData data that will be passed to your function (only if non-null) as the 2nd argument
 * @param failFunc a Function that will be called if the asynchronous
 *        function fails to execute, useful if perhaps the failure to
 *        execute was intentional or if you want a specific failure message
 * @return the Function that can be used as an event listener
 */
public function addAsync(func : Function, timeout : int,
        passThroughData : Object = null, failFunc : Function = null) : Function
{
    if (asyncTestHelper == null)
    {
        asyncTestHelper = new AsyncTestHelper(this, testResult);
    }
    asyncMethods.push({func: func, timeout: timeout, extraData: passThroughData, failFunc: failFunc});
    return asyncTestHelper.handleEvent;
}

Basically, addAsync adds a delayed checkpoint to the test validation. Internally, the AsyncTestHelper class schedules a timer to do that.


Why does our test need a timer too? I guess we could simply wrap our event handlers with addAsync() like this:


userService.login.addEventListener("result", addAsync(handleLoginResponse, RESULT_CHECK_TIMEOUT_MS));
userService.addEventListener("fault", addAsync(faultHandler, RESULT_CHECK_TIMEOUT_MS));

Well, that will work for the handleLoginResponse case, but it creates a problem for faultHandler. In case of a successful response from the service, the timer scheduled internally by addAsync for the fault handler will time out, and the test will fail with an error saying that the faultHandler() method was never called. I suppose I could cancel one of the timers once either of the async checkpoints is reached. However, I don't have access to those timers, and I don't want to make the tests too dependent on the internal implementation of FlexUnit. Therefore, I introduced a local timer that fires after TIMEOUT_MS milliseconds and invokes a method that validates the results set by the data service handlers. For example, the testLogin() method schedules a call to the loginCheck() method. The loginCheck() method validates the class variables fault and user, which are set in the data service handlers handleLoginResponse() and faultHandler(). This is the basic idea.


The test runner is based on the one I described in an earlier post. Here is its source code:


<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns="*"
xmlns:flexunit="flexunit.flexui.*"
creationComplete="onCreationComplete()">
<mx:Script>
<![CDATA[
import flexunit.framework.TestSuite;
import flexunit.flexui.TestRunnerBase;
import mx.collections.ArrayCollection;

import org.blazedspractice.model.UserTest;

[Bindable]
public var testClients:ArrayCollection;

public var NUMBER_OF_TESTS:int = 1;

private function onCreationComplete():void
{
var clients:Array = new Array();
var i:int;
for (i = 0; i < NUMBER_OF_TESTS; i++) {
clients.push("test"+i);
}
testClients = new ArrayCollection(clients);
startTests();
}

// Creates the test suite to run
private function createSuite():TestSuite {
var ts:TestSuite = new TestSuite();

ts.addTest( UserTest.suite() );

return ts;
}

private function startTests():void {
trace(" elements: " + testRepeater.numChildren);
var tests:Array = this.test as Array;
for each (var testRunner:TestRunnerBase in tests) {
testRunner.test = createSuite();
testRunner.startTest();
}
}

]]>
</mx:Script>

<mx:Button name="Run" label="Start Tests" click="startTests()" />

<mx:Panel layout="vertical" width="100%" height="100%">
<mx:Repeater id="testRepeater" dataProvider="{testClients}">
<flexunit:TestRunnerBase id="test" width="100%" height="100%" />
</mx:Repeater>
</mx:Panel>
</mx:Application>

Since the test runner is a Flex application, the build and deployment are the same as for the main application. I actually deploy it together with the main application. Here is my Ant target that creates HTML wrappers for the application itself and the test runner:


  <target name="compile.flex" depends="init, compile.flex.components, compile.flex.mxml, compile.flex.tests">
<html-wrapper title="${APP_TITLE}" file="index.html" application="app"
swf="${module}" version-major="9" version-minor="0" version-revision="0"
width="90%" height="100%" history="true" template="express-installation"
output="${build.dir}/${ant.project.name}/" />
<html-wrapper title="${APP_TITLE}" file="test.html" application="testapp"
swf="TestRunner" version-major="9" version-minor="0" version-revision="0"
width="90%" height="100%" history="true" template="express-installation"
output="${build.dir}/${ant.project.name}/" />
</target>

To run the tests, I simply navigate to the URL of test.html; in my case it is http://localhost:8400/blazeds_practice2/test.html. The tests run automatically on the page's creationComplete event. The "Start Tests" button can be used to run them again. Here are the run results:



I hope this article will be helpful. I am sure this approach can be improved. The code I posted here does not cover all the conditions and most probably contains a few errors. I do not recommend using it as is in production. The intent was to share an idea and get some feedback. I am sure the approach described here is just one of the possible ways. I would be interested to learn more about testing asynchronous code and am open to your comments and suggestions.

Tuesday, May 20, 2008

Building a Flex project with Ant

Here is a quick sample of how to build a simple Flex "hello world" project with Ant.

The "hello world" project contains a src folder with one Flex application file and an Ant build.xml file:

./project_home/
    ./src/
        app.mxml
    build.xml

The app.mxml file is the main module of the project. It simply has a label with the text "Hello World!":

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
<mx:Label text="Hello World!"/>
</mx:Application>
Here is the build.xml source:
<?xml version="1.0"?>
<project name="fx_practice7" default="all">

<!-- Init the build process -->
<target name="init" unless="initialized">
<!-- Name of project and version -->
<property name="FLEX_HOME" location="/Users/mykola/java/flex"/>

<property name="proj.name" value="${ant.project.name}" />
<property name="proj.shortname" value="${ant.project.name}" />
<property name="version.major" value="0" />
<property name="version.minor" value="9" />
<property name="version.revision" value="0" />
<property name="APP_TITLE" value="Sample Application" />
<property name="APP_WIDTH" value="800" />
<property name="APP_HEIGHT" value="600" />

<!-- Global properties for this build -->
<property name="build.dir" location="${basedir}/build" />
<property name="flex_src" location="${basedir}/src" />

<path id="project.classpath">
<pathelement path="${java.class.path}" />
</path>

<taskdef resource="flexTasks.tasks"
classpath="${FLEX_HOME}/ant/lib/flexTasks.jar" />

<echoproperties/>

<property name="initialized" value="true" />

<mkdir dir="${build.dir}" />
</target>

<!-- Default target: clean and build the application -->
<target name="all" depends="init">
<antcall target="clean" />
<antcall target="build" />
</target>

<!-- Compile Flex files -->
<target name="compile.flex" depends="init">
<property name="module"
value="${ant.project.name}"
description="The name of the application module." />

<mxmlc file="${flex_src}/${module}.mxml"
keep-generated-actionscript="true"
output="${build.dir}/${ant.project.name}/${module}.swf"
actionscript-file-encoding="UTF-8"
incremental="true"
context-root="${ant.project.name}"
debug="true">
<load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" />
<source-path path-element="${FLEX_HOME}/frameworks" />
<compiler.source-path path-element="${flex_src}" />
</mxmlc>

<html-wrapper title="${APP_TITLE}"
file="index.html"
application="app"
swf="${module}"
width="${APP_WIDTH}"
height="${APP_HEIGHT}"
version-major="${version.major}"
version-minor="${version.minor}"
version-revision="${version.revision}"
history="true"
template="express-installation"
output="${build.dir}/${ant.project.name}/" />

</target>

<!-- Build the application -->
<target name="build" depends="init">
<antcall target="compile.flex" />
</target>

<!-- Clean build files -->
<target name="clean" depends="init">
<delete dir="${basedir}/generated" />
<delete dir="${build.dir}" />
</target>

<target name="usage" description="Usage documentation">
<echo>
all - clean and build the project
</echo>
</target>
</project>
NOTE: Please update FLEX_HOME and other properties in the build.xml as required for your environment.

To do the build, simply run ant in the project folder. The build output will be stored in the project's build/ folder. To see the result, open index.html in a browser. The index.html file is generated in the build/${ant.project.name}/ folder, i.e. build/fx_practice7/ in this example.

Monday, May 19, 2008

Using FlexUnit for Stress Testing

I saw quite a few questions in the forums on how to stress test a Flex application. I thought about it and came up with an idea that I want to share here.




I think FlexUnit can be used for stress testing. It is not that difficult: I simply add multiple test runners, one per simulated client application, and run all of them asynchronously. Here is an example of the FlexUnit runner:




<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns="*"
xmlns:flexunit="flexunit.flexui.*"
creationComplete="onCreationComplete()">

<mx:Script>
<![CDATA[
import flexunit.framework.TestSuite;
import test.TemperatureConverterTest;
import test.ArrayUtilTest;
import mx.collections.ArrayCollection;
import flexunit.flexui.TestRunnerBase;

[Bindable]
public var testClients:ArrayCollection;

public var NUMBER_OF_TESTS:int = 100;

private function onCreationComplete():void
{
var clients:Array = new Array();
var i:int;
for (i = 0; i < NUMBER_OF_TESTS; i++) {
clients.push("test"+i);
}
testClients = new ArrayCollection(clients);
}

// Creates the test suite to run
private function createSuite():TestSuite {
var ts:TestSuite = new TestSuite();

ts.addTest( TemperatureConverterTest.suite() );
ts.addTest( ArrayUtilTest.suite() );

return ts;
}

private function startTests():void {
trace(" elements: " + testRepeater.numChildren);
var tests:Array = this.test as Array;
for each (var testRunner:TestRunnerBase in tests) {
testRunner.test = createSuite();
testRunner.startTest();
}
}

]]>
</mx:Script>

<mx:Button name="Run" label="Start Tests" click="startTests()" />

<mx:Panel layout="vertical" width="100%">
<mx:Repeater id="testRepeater" dataProvider="{testClients}">
<flexunit:TestRunnerBase id="test" width="100%" height="100%" />
</mx:Repeater>

</mx:Panel>
</mx:Application>






Here is a screen shot of the test with 100 test runners:





So, I guess once you have a client application that runs hundreds of tests in parallel, it is easy to launch it on several instances of the browser, or even on several PCs. All you need to do is add a startTests() call to onCreationComplete() as the last line of the method, deploy the app, and simply open the URL in the browsers.
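
For example, the onCreationComplete() handler of the runner shown above would end up like this (a minimal sketch; everything else in the runner stays the same):

private function onCreationComplete():void
{
    var clients:Array = new Array();
    var i:int;
    for (i = 0; i < NUMBER_OF_TESTS; i++) {
        clients.push("test" + i);
    }
    testClients = new ArrayCollection(clients);
    // Kick off all the test runners as soon as the application is ready.
    startTests();
}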


The other nice thing about it is that this test client can be used for profiling purposes. Flex Builder has a very nice profiler. I found it very helpful in testing my BlazeDS application.

Friday, April 25, 2008

Java and Flex Builder 3


Flex Builder 3 is distributed as an Eclipse plugin and as a standalone IDE. The standalone version does not support Java development out of the box. These couple of blog posts helped me to add Java support to the standalone version.




Saturday, March 15, 2008

Setting a default application for Flex SWF files on Mac


I found a small problem with opening SWF files on Mac. I am using Mac OS X Tiger at the moment. I installed Flex SDK 3 and created a sample project in Eclipse with an Ant build file that compiles my Flex mxml code to SWF files. I've noticed that the generated SWF files are associated with RealPlayer; however, it does not really play them.






I can open these files in Firefox just fine. But the bigger problem is when I use the command-line Flex debugging tool called fdb. It opens RealPlayer for me, and I cannot really choose a player in the fdb tool.




Adobe fdb (Flash Player Debugger) [build 814]
Copyright (c) 2004-2007 Adobe, Inc. All rights reserved.
(fdb)
(fdb) run file:///Users/mykola/progs/workspace2/fx_practice3/build/fx_practice3.swf
Attempting to launch and connect to Player using URL
file:///Users/mykola/progs/workspace2/fx_practice3/build/fx_practice3.swf



Here is how to fix it.

  1. Open Finder and locate the file with swf extension.
  2. Right click and select Get Info.
  3. In the Get Info dialog, select Open with: Other ...
  4. In the Choose Other Application dialog, set Enable: All Applications and select Firefox or another application you would like to use to play SWF files.
  5. Click Add. The selected application will be displayed in the Get Info dialog.
  6. The last step is to click Change All... to set this player application for all SWF files.
Done! :)

Monday, February 18, 2008

How to debug SOAP on Mac with tcpdump

I’ve been using several tools to debug SOAP on Mac and Windows. I would like to share my experience in using some of the tools and show some examples.

So far I have found four major categories of tools that are useful for debugging SOAP:
  1. Interface listeners such as tcpdump and others
  2. Proxy tools such as TCPMonitor
  3. Servlet filters
  4. Application server logging. For example, Weblogic has a special logging option that dumps SOAP requests and responses to the server log.
All of these tools have their own advantages and disadvantages. In this article, I will describe the first category: an interface listener tool called tcpdump. I am planning to describe the other three categories in future posts.

tcpdump is a tool that sniffs IP traffic and dumps it to a file or to the standard output stream. It was originally developed by Van Jacobson, Craig Leres, and Steven McCanne at the Lawrence Berkeley National Laboratory, University of California, Berkeley. It is open source. The tcpdump documentation and source code are available at http://www.tcpdump.org/. tcpdump has been ported to many platforms. The version for Mac comes with MacPorts (http://www.macports.org/).

Here is an example of capturing SOAP traffic where the service is deployed on the local host:

$ sudo tcpdump -i lo0 -A -s 1024 -l 'dst host localhost and port 8080' | tee dump.log

I use sudo because tcpdump requires root privileges to run. In this example, tcpdump listens on the loopback interface, defined on my Mac as lo0, and pipes the data to the dump.log file. To see all available interfaces, run ifconfig or tcpdump with the -D option.

The Web service I use in this example is deployed on an instance of the GlassFish open source application server for Java EE 5 (https://glassfish.dev.java.net/). It has a couple of methods:


import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;

/**
 * Coffee Shop service.
 *
 * @author mykola
 */
@WebService()
public class CoffeeShop {

    @WebMethod
    public String getName() {
        return "CoffeeShop, Inc.";
    }

    @WebMethod
    @WebResult(name = "price")
    public int placeOrder(@WebParam(name = "product") String product) {
        int price = -1;
        if ("coffee".equalsIgnoreCase(product)) {
            price = 2;
        } else if ("cake".equalsIgnoreCase(product)) {
            price = 5;
        }
        return price;
    }
}


I created a client application, and once I called the getName() Web service method, I got this SOAP request and response in the dump.log file:


Request:



Response:


The tcpdump logs lots of data. For SOAP debugging, we are interested in the <S:Envelope> content and HTTP headers.

This is a quick and dirty way of getting raw SOAP on the wire. The advantage of this approach is that it is not intrusive, i.e. no special client or service configuration is required. It does not affect the performance of the web service or the client. It logs every byte in the message, even control symbols, which could be a disadvantage in some cases. tcpdump would be very hard to use for debugging SOAP over HTTPS. Also, this approach is hard to use in cases where automated SOAP validation is required. I use filters for that. This is a topic for a future post.

I hope this post was helpful. Please let me know if you have any questions or comments. Thank you for reading.


Friday, February 15, 2008

Deploying BlazeDS to Weblogic 10.0

BlazeDS is a free and open source J2EE application that provides data services to Adobe Flex and AIR applications. I've deployed it to a Weblogic 10.0 server and thought somebody else might be interested in doing the same. Here is how.

In case you need to install Weblogic 10.0, a developer's copy is available for free at http://commerce.bea.com/products/weblogicplatform/weblogic_prod_fam.jsp.

I have it installed on my Windows XP machine in the D:\bea10 folder.

I have created a new domain for my BlazeDS applications. It is easy to do; see http://edocs.bea.com/common/docs100/confgwiz/index.html for details.

In my case, the domain name is blazeds and it is located in the D:\bea10\user_projects\domains\blazeds folder on my computer. The domain name and folder names can be different; I just mention them so it is easier to follow the examples.

Once the Weblogic server is installed and a domain is ready, the BlazeDS applications can be deployed.

A copy of BlazeDS is available at http://labs.adobe.com/technologies/blazeds/. I downloaded the latest version, blazeds_b1_020108.zip, and unzipped it to D:\java\blazeds_b1_020108.

There are three WAR files provided in the BlazeDS distribution: blazeds.war, ds-console.war, and samples.war. Copy them to the autodeploy folder of your Weblogic domain. In my case it is D:\bea10\user_projects\domains\blazeds\autodeploy.

Start the Weblogic server. If you run it on Windows, simply click Start -> All Programs -> BEA Products -> User Projects -> blazeds (or your domain name) -> Start Admin Server.

If everything is OK, the Weblogic server log window will appear on the desktop:


Now is a good time to verify that the BlazeDS applications were deployed successfully. It is easy to do by checking the status of the applications in the Weblogic Admin console and scanning the log files for errors.

My server is configured with HTTP port 7001, so my Weblogic console URL is http://localhost:7001/console. The console application asks for the admin name and password; just a reminder, they were specified during the domain configuration step. Once you are logged in, select Deployments in the Domain Structure window. You should see these applications deployed:

I also like to check the log files for errors. The BlazeDS log file is created in the D:\bea10\user_projects\domains\blazeds\servers\AdminServer\logs folder and is called blazeds.log. I could not find any errors in the log, which is a good sign.

Now BlazeDS is ready. Open the samples URL: http://localhost:7001/samples/. You should see the BlazeDS samples page.

The BlazeDS samples use the database that is provided in the BlazeDS distribution. To start the database, open a command prompt window, navigate to the \blazeds_b1_020108\sampledb folder, and run startdb.bat:

Now let's test drive BlazeDS. Navigate to the BlazeDS Test Drive page at http://localhost:7001/samples/testdrive.htm and follow the sample instructions. All sample applications worked nicely in my case. I got data via HTTPService, Web Services, and the Java Remoting mechanism just fine.


I checked the blazeds.log and AdminServer.log files for errors and could not find any. Everything ran just fine.

I hope you found this post helpful. Please submit any comments or questions. I will be glad to help.