OOP Design and Practices in Business Systems
by Brian Mains

Introduction

There are many concepts and design patterns that can be used in a business system.  This article takes a more philosophical look at the design ideas and concepts that can be applied in application development.

Data Layer Architecture

There is a lot of debate about the correct architecture to use for a data layer.  Should you use strongly-typed datasets?  Should you custom code your DAL?  Should you use Data Transfer Objects to pass the data back and forth?  There are pros and cons to each.  Strongly-typed datasets seem to have a lot of support among developers, mainly because they are easy to use and set up.  They create strongly-typed DataTable/DataRow objects as part of the designer-generated code.

Alternatively, DAL code could return a DataTable, DataSet, or a data transfer object.  This is another source of argument as to which approach is best.  Data Transfer Objects require a coding effort for each data source, but they do provide strong typing and some conversion capabilities, similar to the strongly-typed dataset.

DataSet (weakly-typed) and DataTable objects are disconnected representations of the data that are easily and widely usable in application development.  They aren't strongly typed, meaning that you have to reference tables, columns, and cells by name or index.  A DataSet is composed of one or more DataTable objects, and a DataTable is composed of DataColumn and DataRow objects.
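For example, pulling a value out of a weakly-typed DataTable requires a name-based lookup and a manual cast; the table and column names in this sketch are hypothetical:

DataTable customers = GetCustomersTable();
// The column is looked up by name and cast by hand; a renamed
// column only fails at run time, not at compile time.
string name = (string)customers.Rows[0]["CustomerName"];
// A strongly-typed dataset would instead expose this as
// customersDataSet.Customers[0].CustomerName, checked by the compiler.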

Each of these has its own drawbacks.  Strongly-typed datasets can present configuration problems, especially in determining whether a field can be null and how nulls are handled (this may be a minor problem).  In addition, some people like to have control over their data layer, and because of this prefer to custom develop it instead of using the designer approach.  Also, a strongly-typed dataset returns all of the data for a specific table, which is not always desired.

Data Transfer Objects do require an additional coding effort, which adds to the total time it takes to develop an application.  They are strongly typed, which means they provide a stable interface to the data source; however, any change to the data source requires corresponding work on the data transfer object.
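As a rough sketch, a hand-coded data transfer object for a role might look like the following (RoleData is a hypothetical name, not code from this article):

public class RoleData
{
  private string _name;
  private string _description;

  public RoleData(string name, string description)
  {
    _name = name;
    _description = description;
  }

  // Strongly-typed accessors replace string-based column lookups.
  public string Name { get { return _name; } }
  public string Description { get { return _description; } }
}

Every new column in the data source means another field, constructor parameter, and property here, which is where the extra coding effort comes from.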

DataSets and DataTables are widely used classes since they can represent data from a database, read XML, or be returned by an XML Web Service.  However, they aren't strongly typed, which means you have to reference columns by name, and so a column name change can be difficult to propagate.  Another alternative exists, which is the use of a third-party framework, but I'm not going to touch upon that.

When determining your architecture, a strongly-typed dataset is the least amount of work, a data transfer object approach is the most, and weakly-typed DataSets/DataTables fall in between.  There are other considerations as well.  Will this API be used for a web service?  Will it be used as a data contract in a WCF service?  Will it be used to bind to an ASP.NET web page?  Will these objects control the interaction within the application directly?  How will each of those scenarios affect the data?  These are all factors to take into account in your architecture.

Between the data layer and the business layer, some action on the data takes place.  In some situations, the data is passed directly, such as passing a DataTable or strongly-typed dataset straight through the business layer to the caller.  In other situations, the data layer returns a data object to the business layer, and some conversion takes place.  This is another factor to take into account when designing the back-end architecture.

Generally, it seems that application development has been heading toward business architectures built with domain-driven design and business objects.  An API that controls your application is very popular, especially in the realm of Windows applications, such as the Microsoft Office platform.

Is Architecture Needed?

Is all of that architecture needed?  I've always thought so for my larger-scale applications and Windows-based applications, but for my web applications I didn't think so.  Since studying design patterns and domain-driven design, I'm changing my mind about how much is really beneficial.  The problem I saw was that business objects took longer to design and develop, and required more development time to set up testing.  However, I also saw the benefit that they can be tested at all, since the user interface is hard to test with conventional unit testing frameworks (certain testing frameworks for this are emerging).

The additional benefit is that when the application connects to the business layer, the business layer can catch some of the problems with the input data and handle them appropriately, whereas in an ASP.NET page, all data has to be handled within the page and code could easily be duplicated.  In addition, the errors raised by the .NET 2.0 data source controls are not always helpful in explaining what the true problem really is.

Because of that, and the unit testing benefit, I've changed my mind and now include at least minimal business and data layer coding.  Usually, for a small project, I have a business layer that returns a DataTable object directly to the ASP.NET page instead of using business objects.  This approach can still make use of the ObjectDataSource, which binds directly to a business object and calls a method within it.
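As a sketch, the markup for that kind of binding might look like the following, assuming a RoleManager business class with a GetRoles method (both names are hypothetical):

<asp:ObjectDataSource ID="odsRoles" runat="server"
  TypeName="Mains.Examples.RoleManager" SelectMethod="GetRoles" />
<asp:GridView ID="gvRoles" runat="server" DataSourceID="odsRoles" />

The ObjectDataSource instantiates the business class and calls GetRoles, and the grid binds to whatever the method returns, whether a DataTable or a business collection.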

In cases where the project is larger, or often with Windows-based applications, I make use of business objects in my applications, having a factory return a single object or a collection of objects to the caller.  I tend to prefer not to use strongly-typed datasets, but instead to create the DAL code using the Enterprise Library Data Access Application Block.  The data layer returns a DataTable from the database call to the business layer.  Alternatively, DataTable/DataSet objects can be used to read and write XML data, so these objects serve a dual purpose.  For instance, below is a possible data layer method:

Listing 1

public DataTable GetRoles()
{
  // Create the database wrapper from the configured connection string.
  Database database = DatabaseFactory.CreateDatabase(_connectionStringName);
  DbCommand command = database.GetStoredProcCommand(this.GetRolesProcedureName);
  database.AddInParameter(command, "ApplicationName", DbType.String,
    this.ApplicationName);
  // Execute and return the first table of the result set.
  DataTable rolesTable = database.ExecuteDataSet(command).Tables[0];
  return rolesTable;
}

The business layer would perform the conversion and return a business collection, as shown below:

Listing 2

public RoleCollection GetRoles()
{
  RoleDataGateway gateway = new RoleDataGateway();
  DataTable rolesTable = gateway.GetRoles();
  RoleCollection collection = new RoleCollection();
  // Convert each raw data row into a strongly-typed Role object.
  foreach (DataRow roleRow in rolesTable.Rows)
  {
    collection.Add(new Role(roleRow["Name"].ToString(),
      roleRow["Description"].ToString()));
  }
  return collection;
}

ASP.NET can bind to business objects; however, complex business objects can make this process complicated.  An Eval statement can return a class reference.  For instance, if a property named DbSystem returns a class instance of type Database, the statement must be cast before sub-properties can be accessed, in the form of:

Listing 3

<%# ((Database)Eval("DbSystem")).Name %>

This can make things complicated in some object-oriented designs, especially with large object chains.  A better alternative is to convert the object to a data source better suited to binding, or to create a data source control that can break the object down into the raw property values it needs.
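One simple version of the first option, as a sketch, is to expose a flattening pass-through property on the business object itself, so the page can Eval a flat name instead of casting (DbSystemName is a hypothetical wrapper, not part of the code above):

// A read-only wrapper that flattens one level of the object chain.
public string DbSystemName
{
  get { return this.DbSystem.Name; }
}

The binding then becomes <%# Eval("DbSystemName") %>, with no cast in the markup.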

In ASP.NET, these larger-scale business solutions often use an MVC approach, or one of the controller patterns such as Page Controller or Front Controller.  I've used the MVC approach in one of my applications, and although it was considerably harder to develop, the ability to unit test pages/user controls and to control more of the interactions through code provided a key benefit in ensuring the application works correctly.  I hope to write an article on this soon.

Inheritance or Interfaces?

A question often arises about the use of class inheritance (such as an abstract base class) versus interfaces, and which solution is better.  There are pros and cons to both.  For instance, interfaces can be used even when a class already inherits from another class.  Interfaces are also commonly used in several libraries; mocking libraries, for example, make extensive use of them.  However, because interfaces only define the properties and methods to use and not the implementation, they take more work to implement.  This is a benefit of base classes: they can provide the underlying plumbing without you having to write extra code.

For certain problems, however, the answer may be both.  In certain situations a class is forced to inherit from another class (such as an ASP.NET control, which can still implement IWebPart to gain web part capabilities).  By creating both a base class and an interface, you open up the possibility for more than one type of object to incorporate the functionality.  So, by having the following interface/class as your base, both custom objects and existing objects can make use of the change tracking features (to be discussed later).

Listing 4

public interface IDomainEntity
{
  bool IsDirty { get; }
  void ClearDirtyStatus();
  void SetDirty();
}

A base class can implement the interface as shown below:

Listing 5

public abstract class DomainEntity : IDomainEntity
{
  // Tracks whether the entity has unsaved changes; starts clean.
  private bool _isDirty = false;
 
  #region " Properties "
 
  protected bool IsDirty
  {
    get
    {
      return _isDirty;
    }
  }
 
  bool IDomainEntity.IsDirty
  {
    get
    {
      return this.IsDirty;
    }
  }
 
  #endregion
 
  #region " Methods "
 
  protected void ClearDirtyStatus()
  {
    _isDirty = false;
  }
 
  void IDomainEntity.ClearDirtyStatus()
  {
    this.ClearDirtyStatus();
  }
 
  protected void SetDirty()
  {
    _isDirty = true;
  }
 
  void IDomainEntity.SetDirty()
  {
    this.SetDirty();
  }
 
  #endregion
}

Clearly there will be cases where each approach, or a combination of the two, is appropriate.

Determining Changes/Errors

When designing object-oriented systems, the approach is often to use domain or business objects containing properties and methods related to the underlying data.  There are various schools of thought about how to do this, and whether to use custom objects at all instead of the built-in .NET types.  Rather than get into that discussion, this section uses the domain object concept for designing application systems.

An object comes into existence in some way; for instance, a new instance is created.  At this point, it would be considered "dirty," meaning it hasn't been saved to the underlying data source (database, XML, text file, or some other source).  The data source doesn't know anything about that data, and the object shouldn't know anything about the data source (at any time).  Upon saving it to the repository, it would be considered "clean," in that it doesn't contain any changes from what the data source knows it to be.  However, upon changing the properties of the object, it would be considered "dirty" again because it now differs from what the underlying data source knows it to be.

What is the means to track changes?  As shown in the previous code example, IDomainEntity describes an IsDirty property that tracks whether the object is dirty.  The methods change the value of that property internally based on whether any changes were made.  It may seem obvious to make IsDirty writable and assign the value directly, but I like this indirection better.  It makes the actions clearer: you either set the dirty flag or clear it.  Making the property writable could lead to accidentally setting it to true when false was intended, a mistake that is harder to make with explicit method calls.  The following is an example of a property:

Listing 6

private string _name;

public string Name
{
  get
  {
    return _name;
  }
  set
  {
    // Only flag the entity as dirty when the value actually changes.
    if (_name != value)
    {
      _name = value;
      this.SetDirty();
    }
  }
}

The property above only sets the dirty flag if the name actually changed from its previous value.  If you have a repository that holds a collection of IDomainEntity objects, it is possible to have a method like the one below:

Listing 7

// Defined in a repository class such as Repository<T>,
// where T is constrained to IDomainEntity.
public virtual int SaveChanges()
{
  int savedItems = 0;
 
  foreach (T item in this.Items)
  {
    if (item.IsDirty)
    {
      this.SaveItem(item);
      item.ClearDirtyStatus();
      savedItems++;
    }
  }
 
  return savedItems;
}

This method calls the SaveItem method to perform the actual storage of the item data.  It is possible to take this further by creating a full repository that works with a business object.
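A minimal sketch of such a repository base class is below, assuming the IDomainEntity interface from Listing 4; the Repository<T> name and the abstract SaveItem member are my assumptions here, not code from the listings above.  The SaveChanges method shown earlier would be a member of this class.

public abstract class Repository<T> where T : IDomainEntity
{
  private List<T> _items = new List<T>();

  // The working set of entities tracked by this repository.
  protected IList<T> Items
  {
    get { return _items; }
  }

  // Derived repositories supply the actual persistence logic
  // (a stored procedure call, an XML write, and so on).
  protected abstract void SaveItem(T item);
}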

Validation

When creating a business object, the object also has to be validated.  The data entered into a form needs to be checked for accuracy and correctness.  What means should we use?  Should the object be stored in the business layer if it has an error anyway?  Where should validation be performed, and how?

In most instances, data is entered into a web or windows form and a button is clicked to add the information to the repository.  At this point, the object representing the record has not yet been created, so this is a natural point for validation to occur.  However, some applications require a different approach.  They are very dynamic, tying together multiple resources from many places.  For instance, when a form contains many fields, it is common to break it up into several forms or to use a tab control of some sort.  Some of the information related to the object may only be available after the primary record has been saved to the database and the user has had time to review it.  A better solution in that case is to save the object to the repository, even though it is in error, and come back to it later.

For instance, additional properties of the object may only be possible to set up after the object is created.  The object would be saved, but flagged as incorrect through some sort of status property, and any additional changes could be made later.  In the Windows world this is easily taken care of; in the web world these objects often have to be persisted to a data store, because the web is stateless and knows nothing about the objects created during the previous page lifecycle.  An external means, such as a database or cache, can be used to persist the object, and the choice must be weighed against how many people will use the application.  Caching and session mechanisms only store objects for so long, so error handling must ensure that the application still works if the object no longer exists.
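As a sketch, reading a draft object back from session state with a fallback looks like the following; the session key, the LoadDraftCommentFromDatabase helper, and the userId variable are all hypothetical:

// Session entries can expire, so the missing case must be handled.
Comment draft = Session["DraftComment"] as Comment;
if (draft == null)
{
  // Fall back to the persisted copy, or start fresh if none exists.
  draft = LoadDraftCommentFromDatabase(userId);
}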

Another consideration is the actual validation of the input fields.  For instance, you can make use of the built-in validation features of Windows and web applications, or you can use the Enterprise Library 3 Validation Application Block.  This application block can validate business objects through attributes or through configuration files established for a business class.  An example of a business object is shown below:

Listing 8

public class User
{
  private string _authorizationCode;
  private string _email;

  // AuthorizationCodeValidator is a custom validator (see the note on
  // custom validators below).
  [StringLengthValidator(5, RangeBoundaryType.Inclusive, 7,
    RangeBoundaryType.Inclusive,
    "The authorization code is outside the range of valid values", Ruleset =
    "primary"), AuthorizationCodeValidator(Ruleset = "primary")]
  public string AuthorizationCode
  {
    get
    {
      return _authorizationCode;
    }
    set
    {
      _authorizationCode = value;
    }
  }
 
  // EmailDomainValidator is a custom validator that checks the address
  // against a list of allowed top-level domains.
  [StringLengthValidator(7, RangeBoundaryType.Inclusive, 150,
    RangeBoundaryType.Inclusive,
    "Email address must be between 7 and 150 characters", Ruleset = "primary"),
    ContainsCharactersValidator("@.", ContainsCharacters.All,
    "The email must have at least an @ and at least one period", Ruleset =
    "primary"), EmailDomainValidator(".com", ".net", ".edu", ".gov", ".biz",
    ".tv", Ruleset = "primary")]
  public string Email
  {
    get
    {
      return _email;
    }
    set
    {
      _email = value;
    }
  }
}

The Enterprise Library Validation Application Block also integrates with ASP.NET through a new PropertyProxyValidator control that performs server-side validation using the validation attributes/configuration.  It is also customizable, so you can include additional custom validators for whatever your needs may be.  Below is a definition of that validator:

Listing 9

<el:PropertyProxyValidator ID="ppvName" runat="server" 
      SourceTypeName="Mains.Examples.User,App_Code"
      PropertyName="Name" RulesetName="primary" ControlToValidate="txtName">*</el:PropertyProxyValidator>

The repository or factory you build could validate a business object using the Validation block and, if it is in error, store this error state in a flag within the object.  That way, any objects in error are not persisted to the data store.  The new SaveChanges method (modified from above) is below:

Listing 10

public virtual int SaveChanges()
{
  int savedItems = 0;
 
  foreach (T item in this.Items)
  {
    if (item.IsDirty)
    {
      // Only persist items that pass validation; flag the rest.
      // Assumes the entity exposes a writable IsValid flag for error state.
      if (Validation.Validate<T>(item).IsValid)
      {
        this.SaveItem(item);
        item.ClearDirtyStatus();
        savedItems++;
      }
      else
        item.IsValid = false;
    }
  }
 
  return savedItems;
}

How Much Exposure?

When developing classes, the question often arises of how much of the internal workings should be exposed.  One case worth discussing is repository-based classes, where the repository works like a list of domain objects.

One common way to store domain objects is in a collection of some sort, where objects can be added, removed, or inserted.  A list also has the capability to perform find/contains operations, searching for a specific item.  However, should the list be exposed directly?  If the list is exposed as a public read-only property, any object can still add items to it.  Most collection classes also don't expose events for when an object is added or removed, meaning the owning object doesn't know when these changes occur; this is one of the reasons I created a collection that raises events upon adding/removing items in my Nucleo framework, sketched below.
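A minimal sketch of that idea, built on Collection<T> from System.Collections.ObjectModel, is shown below; this is an assumption about the general approach, not the actual Nucleo implementation:

public class NotifyingCollection<T> : Collection<T>
{
  // Raised after an item is added or removed, so the owning
  // object can react to membership changes.
  public event EventHandler ItemAdded;
  public event EventHandler ItemRemoved;

  protected override void InsertItem(int index, T item)
  {
    base.InsertItem(index, item);
    if (ItemAdded != null)
      ItemAdded(this, EventArgs.Empty);
  }

  protected override void RemoveItem(int index)
  {
    base.RemoveItem(index);
    if (ItemRemoved != null)
      ItemRemoved(this, EventArgs.Empty);
  }
}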

However, is that a bad thing?  Maybe not.  If your class doesn't need to keep track of state, new objects being added or existing objects being removed isn't that big of a deal.  However, there are times when it is a very big deal, especially when these objects represent data in a data store, and there should be some correlation between the two.

In addition, what about creating the object?  Should the object be directly instantiable?  Should it be created by a factory or another object?  In situations where you are tracking the state of an object, I would highly recommend making the constructor internal so that your code controls when the object is instantiated.  In situations where the class is a helper or utility, it doesn't really make a difference to me.  When you read about this subject on the web, you'll find a variety of opinions.  My own approach is to make the constructor internal whenever I need an extra level of control, and to leave it public when I don't care how the object is created.

For instance, imagine a Comment class that represents a comment generated from a web form.  The factory that would work with this object would look like the following:

Listing 11

public static class CommentFactory
{
  public static Comment CreateComment(string name, string email, string subject,
    string message, string source)
  {
    CommentsDataGateway gateway = new CommentsDataGateway();
    Guid id = gateway.AddComment(name, email, subject, message, source);
 
    return new Comment(id, name, email, subject, message, source);
  }
 
  // Maps a raw data row to a Comment, handling a nullable Source column.
  private static Comment CreateCommentObject(DataRow row)
  {
    return new Comment((Guid)row["CommentID"], (string)row["Name"],
      (string)row["Email"], (string)row["Subject"], (string)row["Message"],
      !row.IsNull("Source") ? (string)row["Source"] : string.Empty);
  }
 
  public static void DeleteCommentsPast(DateTime pastDate)
  {
    CommentsDataGateway gateway = new CommentsDataGateway();
    gateway.DeleteOldComments(pastDate);
  }
 
  public static CommentCollection GetComments(TimeSpan period)
  {
    CommentsDataGateway gateway = new CommentsDataGateway();
    DataTable commentsTable = gateway.GetComments(period);
    CommentCollection commentsList = new CommentCollection();
 
    foreach (DataRow commentRow in commentsTable.Rows)
      commentsList.Add(CreateCommentObject(commentRow));
 
    return commentsList;
  }
}

In the above code, the factory instantiates the Comment object, as the comment's constructor is internal to the project.  The factory is solely responsible for creating the object, which satisfies a principle discussed in the next section.
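For completeness, a possible shape for the Comment class itself is sketched below; the article doesn't show this class, so the members are assumptions based on the factory's usage:

public class Comment
{
  private Guid _id;
  private string _name;
  // Remaining fields (email, subject, message, source) omitted for brevity.

  // Internal: only code in the same assembly, such as CommentFactory,
  // can construct a Comment.
  internal Comment(Guid id, string name, string email, string subject,
    string message, string source)
  {
    _id = id;
    _name = name;
    // Assign the remaining fields here.
  }

  public Guid ID { get { return _id; } }
  public string Name { get { return _name; } }
}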

Choosing the Right Pattern

There is much value in the design patterns other developers have created.  While they are valuable, they aren't right in every situation, which is why each design pattern usually lists the situations in which it should be used.  It is up to you as the developer to determine whether a pattern is right for your situation.  This is the biggest challenge for me because it's interpretive; you have to judge whether your situation actually fits the pattern.  Even after all of my study of design patterns, I still find this determination difficult.

What if you make the wrong choice?  I think that is the reason for a lot of hesitation on the part of developers to use and study design patterns.  However, in my experience, most of what I know comes from making the wrong choices in my designs.  When asked about his failures with the light bulb, Thomas Edison responded, "I have now eliminated 1000 ways it does not work and I get closer and closer to success."  Design patterns can work the other way, too; we can "what if" ourselves to death wondering whether a design is best, constantly seeking perfection or refactoring because we want to try out a new pattern we just learned about.  A balance is needed.

There are other principles that can help with this.  Nilsson mentions the Single Responsibility Principle (SRP) in his book, Applying Domain-Driven Design and Patterns.  This principle simplifies object-oriented development by giving each object only a single responsibility.  Under this principle, a business object can represent the data provided from the data source, but it is the responsibility of another class (such as a factory) to instantiate it.  You can see an example of this in the previous section.

I must admit, I often violate this principle.  For instance, I often find it convenient to include an internal static method that instantiates an object from the data it is loaded from.  Sometimes a class serves two functions, and although combining them makes the class much bigger by adding methods/properties, it is easier to manage this way.

Because of this, I'm of the opinion that if it really works and I'm consistent in my implementation, then I'm OK with violating some patterns/principles.

How Much Configuration?

In application development, it is helpful to allow consumers of the code to configure a class through the configuration file or some other means.  However, you have to determine the scope of your application.  Do you really need to provide configuration?  Do you really need many configurable settings if the application is an intranet web site with a small user base?

The scope of the application really does matter, because most applications can be expanded later to include configuration.  Usually, a custom application that doesn't expose an API to users needs little configuration, a custom framework should have many configurable settings, and everything else falls somewhere in between.

For instance, developing a custom provider requires configuration, as that is part of the setup you see with many of the built-in providers in the .NET 2.0 framework.  But data layer objects and most business objects don't need much configuration at all, except perhaps to expose a connection string key shared by all of the data objects, and other similar settings.  For instance, for a class that manages security passwords, I created a separate configuration element just for it, shown below with a few of its properties.

Listing 12

public class PasswordsElement : ConfigurationElement
{
  [ConfigurationProperty("minimumLength", DefaultValue = 6)]
  public int MinimumLength
  {
    get
    {
      return (int)this["minimumLength"];
    }
    set
    {
      if (value < -1)
        throw new ArgumentOutOfRangeException("value");
      this["minimumLength"] = value;
    }
  }
 
  [ConfigurationProperty("minimumNumericCharacters", DefaultValue = 1)]
  public int MinimumNumericCharacters
  {
    get
    {
      return (int)this["minimumNumericCharacters"];
    }
    set
    {
      if (value < -1)
        throw new ArgumentOutOfRangeException("value");
      this["minimumNumericCharacters"] = value;
    }
  }
 
  [ConfigurationProperty("minimumUpperCaseCharacters", DefaultValue = 1)]
  public int MinimumUpperCaseCharacters
  {
    get
    {
      return (int)this["minimumUpperCaseCharacters"];
    }
    set
    {
      if (value < -1)
        throw new ArgumentOutOfRangeException("value");
      this["minimumUpperCaseCharacters"] = value;
    }
  }
}

This security password management class requires special functionality and therefore a specific way to configure it.  However, for most classes, whether a DAL object, a POCO (Plain Old CLR Object), a custom collection, etc., this would be overkill.

There are some handy custom configuration sections that can aid in application development.  For instance, having a global application element with some basic features helps to centralize these settings throughout the application.  One such implementation is below:

Listing 13

public class ApplicationSection : ConfigurationSection
{
  [ConfigurationProperty("defaultConnectionStringName", DefaultValue = "")]
  public string DefaultConnectionStringName
  {
    get
    {
      return (string)this["defaultConnectionStringName"];
    }
    set
    {
      this["defaultConnectionStringName"] = value;
    }
  }
 
  // Convenience accessor that reads the section from the configuration file.
  public static ApplicationSection Instance
  {
    get
    {
      return ConfigurationManager.GetSection("nucleo/application") as
        ApplicationSection;
    }
  }
 
  [ConfigurationProperty("isTesting", DefaultValue = false)]
  public bool IsTesting
  {
    get
    {
      return (bool)this["isTesting"];
    }
    set
    {
      this["isTesting"] = value;
    }
  }
}

This element provides a default connection string name, which can then be used in the data layer, in a custom provider, or in any other database-connected object.  The IsTesting property is one I use to indicate whether the application is currently under unit test, which enables certain features I've built in.
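For reference, the configuration file entries for this section might look like the following; the type name and assembly are assumptions based on the GetSection path above, so adjust them to match your project:

<configuration>
  <configSections>
    <sectionGroup name="nucleo">
      <section name="application"
        type="Nucleo.Configuration.ApplicationSection, Nucleo" />
    </sectionGroup>
  </configSections>
  <nucleo>
    <application defaultConnectionStringName="MainDB" isTesting="false" />
  </nucleo>
</configuration>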

Conclusion

You've seen several concepts that can be used in object-oriented business systems.  Hopefully this article has shown you some ideas that you can implement in your own business systems.

