Avoid client-side SQL (Data Sentinel Layer)

Postby Ursego » 22 Feb 2013, 09:57

To access or manipulate data, client applications should call database stored procedures rather than issue SQL statements directly. Read Principle of Least Knowledge for more details about the concept.

Caution! The following tip may be irrelevant for very large systems with heavy, transaction-intensive processing!

The client parts of many client/server applications (and the middle-tier components of many n-tier applications) contain SQL statements in one form or another (embedded in a host language, in the data sources of data access objects, etc.), and that looks normal and traditional. But it should not make us happy. It's better to have a "SQL-free" Business Logic Layer (BLL) - and even a SQL-free Data Access Layer (DAL), if one exists.

If you are starting a new database-oriented application (client/server or distributed) and want to really make your life easier, think about creating a Data Sentinel Layer (that's what I call it - it's not an official term ;) ) as the sole channel of interaction between the application and the data stored in the database. In fact, it is a set of stored procedures which act as an API between the client (or the middle-tier application's DAL), written in .NET, Java, PowerBuilder etc., and the data storage (the physical database tables). Only these procs have permission to access and manipulate the stored data directly; the DAL/client app is only allowed to call the procs (which are the main "workhorses" of such a system). So, SQL statements need not appear in the client/middle-tier application at all - not even in its DAL! Yes, yes, I am serious - a large-scale, enterprise-level, data-centric application can exist perfectly well with no SQL outside the database!

So, if ADO.NET is used in the DAL, populate the SelectCommand, InsertCommand, UpdateCommand and DeleteCommand properties of your DbDataAdapter with DbCommand objects whose CommandType property is set to StoredProcedure and whose CommandText contains the name of the stored procedure used for SELECT, INSERT, UPDATE or DELETE (rather than hardcoded SQL, as happens when CommandType is set to Text).

Did you notice how development tools have replaced each other over the last two or three decades? COBOL, C, C++, PowerBuilder, Java; today is the golden age of .NET, and some people are talking about HTML5... And nobody knows what the industry will offer us tomorrow! So, if you use the described approach and program the business logic in stored procs (and not in the BLL of your current-technology-dependent application), there will be fewer problems converting the system to newer technologies in the future - I am sure that Oracle and SQL Server (with their powerful languages PL/SQL and Transact-SQL) will continue to exist even after the End of Days!

So, go ahead!

1. (Oracle version): all the procs serving one table should be organized into one package. For example, it can be named pckOrder for the Order table. Stored procedures should be named consistently across all packages, for example: Sel, Ins, Upd and Del. Of course, the package can also contain other procs - not related to one of the 4 basic actions on the table served by the package - for example, SelOpen (to retrieve orders in status 'OPEN'), GetLastOrderDateForCustomer, etc. It's important to adhere to the following naming convention:
### "Sel" means returning a recordset with the help of a REF CURSOR (which mimics returning data via SQL SELECT).
### "Get" means returning scalar value(s) via the RETURN statement or OUT parameter(s) (like a classic Get... method).
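For illustration, an Oracle package following this convention might look like the sketch below. The ORDERS table, its columns and the Customers relationship are hypothetical, and the body is untested against a live database:

```sql
-- Package spec: the public API for the ORDERS table.
-- Only these procs are granted direct access to the table.
CREATE OR REPLACE PACKAGE pckOrder AS
   -- "Sel": returns a recordset via a REF CURSOR
   PROCEDURE Sel (p_order_id IN NUMBER, p_cur OUT SYS_REFCURSOR);
   -- "Get": returns a scalar value
   FUNCTION GetLastOrderDateForCustomer (p_customer_id IN NUMBER) RETURN DATE;
END pckOrder;
/

CREATE OR REPLACE PACKAGE BODY pckOrder AS

   PROCEDURE Sel (p_order_id IN NUMBER, p_cur OUT SYS_REFCURSOR) IS
   BEGIN
      OPEN p_cur FOR
         SELECT order_id, customer_id, status, order_date
           FROM orders
          WHERE order_id = p_order_id;
   END Sel;

   FUNCTION GetLastOrderDateForCustomer (p_customer_id IN NUMBER) RETURN DATE IS
      v_date DATE;
   BEGIN
      SELECT MAX(order_date)
        INTO v_date
        FROM orders
       WHERE customer_id = p_customer_id;
      RETURN v_date;
   END GetLastOrderDateForCustomer;

END pckOrder;
/
```

The client (or DAL) then calls pckOrder.Sel and reads the result set from the returned cursor, exactly as if it had issued the SELECT itself.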

1. (SQL Server version): all the procs serving one table should begin with the same entity name or prefix. For example, for the Order table it can be something like: OrderSel, OrderIns, OrderUpd and OrderDel (for the main DB actions) and OrderSelOpen, OrderGetLastOrderDateForCustomer etc. (for additional DB actions). It's important to adhere to the following naming convention:
### "Sel" means returning a recordset with the help of a SELECT statement at the end of the stored procedure.
### "Get" means returning scalar value(s) via OUTPUT parameter(s).
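A sketch of the same convention in Transact-SQL (the Orders table and its columns are hypothetical, and the code is untested against a live server):

```sql
-- "Sel": returns a recordset via a SELECT at the end of the proc.
CREATE PROCEDURE OrderSelOpen
AS
BEGIN
   SET NOCOUNT ON;
   SELECT OrderId, CustomerId, Status, OrderDate
     FROM Orders
    WHERE Status = 'OPEN';
END
GO

-- "Get": returns scalar value(s) via OUTPUT parameter(s).
CREATE PROCEDURE OrderGetLastOrderDateForCustomer
   @CustomerId    INT,
   @LastOrderDate DATETIME OUTPUT
AS
BEGIN
   SET NOCOUNT ON;
   SELECT @LastOrderDate = MAX(OrderDate)
     FROM Orders
    WHERE CustomerId = @CustomerId;
END
GO
```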

2. Don't use database triggers at all, and don't write business logic that is automatically executed when records are inserted/updated/deleted (like validation checks and updates of other tables) in the Business Logic Layer. All automatic actions which you would normally program in triggers or in the BLL should be written in the stored procs used to insert, update and delete records. That gives us a huge advantage: all the automatic stuff related to the table appears IN ONE PLACE. Looking at a proc (which, for example, inserts a record), you know that you see THE WHOLE PICTURE OF THE DATABASE OPERATION and not only a part of it, and you are not worried that something will be done twice or that other code will override the results of your change.
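The idea can be sketched like this in Transact-SQL (the Orders and Customers tables, their columns, and the validation rule are all hypothetical):

```sql
-- Everything that must happen on insert lives in this one proc,
-- instead of being split between triggers and the BLL.
CREATE PROCEDURE OrderIns
   @CustomerId INT,
   @Status     VARCHAR(10),
   @OrderId    INT OUTPUT
AS
BEGIN
   SET NOCOUNT ON;

   -- Validation that would otherwise sit in a trigger or in the BLL:
   IF NOT EXISTS (SELECT 1 FROM Customers
                   WHERE CustomerId = @CustomerId AND IsActive = 1)
   BEGIN
      RAISERROR('Cannot create an order for an inactive customer.', 16, 1);
      RETURN;
   END

   INSERT INTO Orders (CustomerId, Status, OrderDate)
   VALUES (@CustomerId, @Status, GETDATE());

   SET @OrderId = SCOPE_IDENTITY();

   -- Side effect that would otherwise live in an AFTER INSERT trigger:
   UPDATE Customers
      SET LastOrderDate = GETDATE()
    WHERE CustomerId = @CustomerId;
END
GO
```

Anyone reading OrderIns sees the whole database operation in one place: the validation, the insert itself and the dependent update.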

3. All stored procs used for SELECT, INSERT, UPDATE and DELETE must return the number of affected records (via the RETURN statement or an OUT/OUTPUT numeric parameter) so the calling application can check that number without an additional trip to the DB.
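For example (a Transact-SQL sketch with a hypothetical Orders table), the row count can be captured with @@ROWCOUNT and handed back through an OUTPUT parameter:

```sql
-- Returns the number of affected rows via an OUTPUT parameter,
-- so the caller can verify the result without a second round trip.
CREATE PROCEDURE OrderUpd
   @OrderId  INT,
   @Status   VARCHAR(10),
   @RowCount INT OUTPUT
AS
BEGIN
   SET NOCOUNT ON;

   UPDATE Orders
      SET Status = @Status
    WHERE OrderId = @OrderId;

   -- @@ROWCOUNT must be read immediately after the UPDATE,
   -- before any other statement resets it.
   SET @RowCount = @@ROWCOUNT;
END
GO
```

If @RowCount comes back as 0, the caller knows the record was not found (or was already deleted by someone else) without querying the table again.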

Postby Ursego » 19 Feb 2018, 14:29

From the O'Reilly book "Oracle PL/SQL Programming":
Rather than have developers write their own SQL statements, you should provide an interface to
those SQL statements. This is one of the most important motivations for building packages, yet is
only rarely employed by developers.

With this approach, PL/SQL developers as a rule will not write SQL in their applications. Instead,
they will call predefined, tested, and optimized code that does all the work for them; for example,
an "add" procedure (overloaded to support records) that issues the INSERT statement and follows
standard error-handling rules; a function to retrieve a single row for a primary key; and a variety
of cursors that handle the common requests against the data structure (which could be a single
table or a "business entity" consisting of multiple tables).

If you take this approach, developers will not necessarily need to understand how to join three or
six different highly normalized tables to get the right set of data. They can just pick a cursor and
leave the data analysis to someone else. They will not have to figure out what to do when they try
to insert and the row already exists. The procedure has this logic inside it (and can be easily
upgraded to use the Oracle9i MERGE statement).

Perhaps the biggest advantage of this approach is that as your data structures change, the
maintenance headaches of updating application code are both minimized and centralized. The
person who is expert at working with that table or object type makes the necessary changes within
that single package, and the changes are then "rolled out" more or less automatically to all
programs relying on that package.