I recently saw this approach used in a complex Apex application built for my current client, and I liked what I saw – so I used a similar one in another project of mine, with good results.
- Pages load and process faster
- Less PL/SQL compilation at runtime
- Code is more maintainable and reusable
- Database object dependency analysis is much more reliable
- Apex application export files are smaller – faster to deploy
- Apex pages can be copied and adapted (e.g. for different interfaces) more easily
How did all this happen? Nothing earth-shattering or terribly original. I made the following simple changes – and they only took about a week for a moderately complex 100-page application (that had been built haphazardly over the period of a few years):
- All PL/SQL Process actions moved to database packages
- Each page only has a single Before Header Process, which calls a procedure (e.g. `CTRL_PKG.p1_load;`)
- Each page only has a single Processing Process, which calls a procedure (e.g. `CTRL_PKG.p1_process;`)
- Computations are all removed; they are now done in the database package
The only changes I needed to make to the PL/SQL to make it work in a database package were that bind variable references (e.g. `:P1_CUSTOMER_NAME`) needed to be changed to use the `V()` (for strings and dates) or `NV()` (for numbers) functions; and I had to convert the Conditions on the Processes into the equivalent logic in PL/SQL. Generally, I would retrieve the values of page items into local variables before using them in a query.
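To illustrate, a page process that referenced bind variables directly might be converted along these lines (a minimal sketch; the table, item names and condition are made up for illustration):

```sql
-- Before: PL/SQL in an Apex page process, with a Condition of
-- "Request = Expression 1" (Expression 1 = 'SUBMIT')
UPDATE customers
SET    customer_name = :P1_CUSTOMER_NAME
WHERE  customer_id = :P1_CUSTOMER_ID;

-- After: the same logic in a database package procedure; the Condition
-- becomes an IF statement, and bind variables become V()/NV() calls
PROCEDURE p1_process IS
  l_customer_id   customers.customer_id%TYPE   := NV('P1_CUSTOMER_ID');
  l_customer_name customers.customer_name%TYPE := V('P1_CUSTOMER_NAME');
BEGIN
  IF APEX_APPLICATION.g_request = 'SUBMIT' THEN
    UPDATE customers
    SET    customer_name = l_customer_name
    WHERE  customer_id = l_customer_id;
  END IF;
END p1_process;
```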
My “p1_load” procedure typically looked something like this:
```sql
PROCEDURE p1_load IS
BEGIN
  msg('p1_load');
  member_load;
  msg('p1_load Finished');
END p1_load;
```
My “p1_process” procedure typically looked something like this:
```sql
PROCEDURE p1_process IS
  request VARCHAR2(100) := APEX_APPLICATION.g_request;
BEGIN
  msg('p1_process ' || request);
  CASE request
    WHEN 'CREATE' THEN
      member_insert;
    WHEN 'SUBMIT' THEN
      member_update;
    WHEN 'DELETE' THEN
      member_delete;
      APEX_UTIL.clear_page_cache(APEX_APPLICATION.g_flow_step_id);
    WHEN 'COPY' THEN
      member_update;
      sv('P1_MEMBER_ID'); -- clear the member ID for a new record
    ELSE
      NULL;
  END CASE;
  msg('p1_process Finished');
END p1_process;
```
I left Validations and Branches in the application. I will come back to the Validations later – this is made easier in Apex 4.1, which provides an API for error messages.
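For example, once validations are moved into the package, a failed check can be reported back to the page with the Apex 4.1 `APEX_ERROR` API. A sketch (the item name and message are illustrative):

```sql
-- Hypothetical validation performed in the package, raising an
-- inline error against a page item via APEX_ERROR (Apex 4.1+)
IF v('P1_SURNAME') IS NULL THEN
  APEX_ERROR.add_error(
    p_message          => 'Surname is required.',
    p_display_location => APEX_ERROR.c_inline_with_field_and_notif,
    p_page_item_name   => 'P1_SURNAME');
END IF;
```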
It wasn’t until I went through this exercise that I realised what a great volume of PL/SQL logic I had in my application – and that PL/SQL was being dynamically compiled every time a page was loaded or processed. Moving it to the database meant that it was compiled once; it also meant that I could more easily see duplicated code (and therefore modularise it, so that the same routine would now be called from multiple pages). I found a number of places where the Apex application was forced to re-evaluate a condition multiple times (as it had been copied to multiple Processes on the page) – now, all those Processes could be put together into one `IF .. END IF` block.
Once all that code is compiled on the database, I can now make a change to a schema object (e.g. drop a column from a table, or modify a view definition) and see immediately what impact it will have across the application. No more time bombs waiting to go off in the middle of a customer demo. I can also query `ALL_DEPENDENCIES` to see where an object is being used.
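For instance, to find everything that references a particular table (the schema and table names here are illustrative):

```sql
-- List all objects that depend on the MEMBERS table
SELECT owner, name, type
FROM   all_dependencies
WHERE  referenced_owner = 'MYSCHEMA'
AND    referenced_name  = 'MEMBERS'
AND    referenced_type  = 'TABLE'
ORDER BY owner, name;
```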
I then wanted to make a Mobile version of a set of seven pages. This was made much easier now – all I had to do was copy the pages, set their interface to Mobile, and then, on the database, call the same procedures. Note that when you do a page copy, Apex automatically updates all references to use the new page ID – e.g. if you copy Page 1 to Page 2, a Process that calls `CTRL_PKG.p1_load;` will be changed to call `CTRL_PKG.p2_load;` in the new page. This required no further work, since my `p1_load` and `p1_process` procedures merely had a one-line call to another procedure, which used the `APEX_APPLICATION.g_flow_step_id` global to determine the page number when using page items. For example:
```sql
PROCEDURE member_load IS
  p      VARCHAR2(10) := 'P' || APEX_APPLICATION.g_flow_step_id;
  member members%ROWTYPE;
BEGIN
  msg('member_load ' || p);
  member.member_id := nv(p || '_MEMBER_ID');
  msg('member_id=' || member.member_id);
  IF member.member_id IS NOT NULL THEN
    SELECT *
    INTO   member_load.member
    FROM   members m
    WHERE  m.member_id = member_load.member.member_id;
    sv(p || '_GIVEN_NAME',    member.given_name);
    sv(p || '_SURNAME',       member.surname);
    sv(p || '_SEX',           member.sex);
    sv(p || '_ADDRESS_LINE',  member.address_line);
    sv(p || '_STATE',         member.state);
    sv(p || '_SUBURB',        member.suburb);
    sv(p || '_POSTCODE',      member.postcode);
    sv(p || '_HOME_PHONE',    member.home_phone);
    sv(p || '_MOBILE_PHONE',  member.mobile_phone);
    sv(p || '_EMAIL_ADDRESS', member.email_address);
    sv(p || '_VERSION_ID',    member.version_id);
  END IF;
  msg('member_load Finished');
END member_load;
```
Aside: Note here the use of `SELECT * INTO [rowtype-variable]`. This is IMO the one exception to the “never `SELECT *`” rule of thumb. The compromise here is that the procedure will query the entire record every time, even if it doesn’t use some of the columns; however, this pattern makes the code leaner and more easily understood; also, I usually need almost all the columns anyway.
In my database package, I included the following helper functions at the top, and used them throughout the package:

```sql
DATE_FORMAT CONSTANT VARCHAR2(30) := 'DD-Mon-YYYY';

-- log a debug message
PROCEDURE msg (i_msg IN VARCHAR2) IS
BEGIN
  APEX_DEBUG_MESSAGE.LOG_MESSAGE($$PLSQL_UNIT || ': ' || i_msg);
END msg;

-- get date value
FUNCTION dv
  (i_name IN VARCHAR2
  ,i_fmt  IN VARCHAR2 := DATE_FORMAT
  ) RETURN DATE IS
BEGIN
  RETURN TO_DATE(v(i_name), i_fmt);
END dv;

-- set value
PROCEDURE sv
  (i_name  IN VARCHAR2
  ,i_value IN VARCHAR2 := NULL
  ) IS
BEGIN
  APEX_UTIL.set_session_state(i_name, i_value);
END sv;

-- set date
PROCEDURE sd
  (i_name  IN VARCHAR2
  ,i_value IN DATE     := NULL
  ,i_fmt   IN VARCHAR2 := DATE_FORMAT
  ) IS
BEGIN
  APEX_UTIL.set_session_state(i_name, TO_CHAR(i_value, i_fmt));
END sd;

-- append a message to the success message shown on the next page render
PROCEDURE success (i_msg IN VARCHAR2) IS
BEGIN
  msg('success: ' || i_msg);
  IF apex_application.g_print_success_message IS NOT NULL THEN
    apex_application.g_print_success_message :=
      apex_application.g_print_success_message || '<br>';
  END IF;
  apex_application.g_print_success_message :=
    apex_application.g_print_success_message || i_msg;
END success;
```
Another change I made was to move most of the logic embedded in report queries into views on the database. This led to more efficiencies as logic used in a few pages here and there could now be consolidated in a single view.
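As a sketch of the idea, derived columns that several report pages each computed in their own query can be defined once in a view (the view and column names here are invented for illustration):

```sql
-- Hypothetical view consolidating derivations previously repeated
-- across several Apex report queries
CREATE OR REPLACE VIEW member_report_v AS
SELECT m.member_id
      ,m.given_name || ' ' || m.surname                AS full_name
      ,CASE WHEN m.email_address IS NOT NULL
            THEN 'Y' ELSE 'N' END                      AS has_email
      ,m.suburb || ' ' || m.state || ' ' || m.postcode AS address_area
FROM   members m;
```

Each report page then just selects from `member_report_v`, so a change to the derivation logic is made in one place.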
The challenges remaining were record view/edit pages generated by the Apex wizard – these used DML processes to load and insert/update/delete records. In most cases these were on simple pages with no other processing added; so I left them alone for now.
On a particularly complex page, I removed the DML processes and replaced them with my own package procedure which did the query, insert, update and delete. This greatly simplified things because I now had better control over exactly how these operations are done. The only downside to this approach is that I lose the built-in Apex lost update protection mechanism, which detects changes to a record done by multiple concurrent sessions. I had to ensure I built that logic into my package myself – I did this with a simple VERSION_ID column on the table (c.f. Version Compare in “Avoiding Lost Updates”).
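The lost-update check can be sketched like this (a simplified, hypothetical version of the update procedure; column and item names are illustrative):

```sql
-- Optimistic locking via a VERSION_ID column: the page stores the
-- VERSION_ID at load time; the update only succeeds if no other
-- session has incremented it in the meantime
PROCEDURE member_update IS
  p VARCHAR2(10) := 'P' || APEX_APPLICATION.g_flow_step_id;
BEGIN
  UPDATE members m
  SET    m.surname    = v(p || '_SURNAME')
        ,m.version_id = m.version_id + 1
  WHERE  m.member_id  = nv(p || '_MEMBER_ID')
  AND    m.version_id = nv(p || '_VERSION_ID');
  IF SQL%ROWCOUNT = 0 THEN
    RAISE_APPLICATION_ERROR(-20000,
      'This record has been changed by another session. Please refresh and try again.');
  END IF;
END member_update;
```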
The only downsides with this approach I’ve noted so far are:
- a little extra work when initially creating a page
- page item references are now strings (e.g. `v('P1_RECORD_ID')`) instead of bind variables – so a typo here and there can result in somewhat harder-to-find bugs
However, my application is now faster, more efficient, and on the whole easier to debug and maintain – so the benefits seem to outweigh the downsides.
