PHP

28 Oct 2016

PDO_DataObject Released

Coding was completed last month, and there is a huge test suite covering a large proportion of the features. This should mean that replacing DB_DataObject will be pretty easy.
You can either just check out the code from github / PDO_DataObject, or use the pear channel:
#pear channel-discover roojs.github.com/pear-channel
#pear install roojs/PDO_DataObject-0.0.1 

Documentation

I revived my old PHP_CodeDoc code (that still needs publishing). It seemed simpler than trying to use any of the other tools out there. It's a pretty simple tool that extracts structure and documentation comments from PHP source code. I added a small amount of code to export to our 'Roo UI bjs toolkit format'.
The generated files are pure JSON, and mostly contain the contents of the comments unformatted. I decided that doing the Markdown conversion in JavaScript was far simpler (I refactored https://github.com/chjj/marked slightly for use with our libraries).
There are a few other tweaks I made: using `@category` to group the documentation, and writing category pages (using roobuilder). To put it all together, the index.js file loads the parts and renders the manual.
This week I finished tidying up the rendering on mobile, and making sure all the comments render nicely using markdown. The result should be a manual that is easy to read and use.


17 Aug 2016

PDO_DataObject is under way

Work has started on revamping my PEAR package DB_DataObject. While it has served well over the years, and I still use it every day, we have been funded to create a new version which runs on PDO.

There is a migration plan in the github repo for PDO_DataObject; I have currently completed the first two blocks, and almost finished the third. The key features are:
  • General compatibility with DB_DataObject, with a few exceptions - methods relating to PEAR::DB have been removed and replaced with PDO calls
  • New simpler configuration methods, with some error checking
  • A complete test suite - which we will apply to DB_DataObject to ensure compatibility
  • Chaining for most methods, so this works:
$data = PDO_DataObject::Factory('mytable')
            ->autoJoin()
            ->where("somevalue not like 'fred%'")
            ->limit(100)
            ->fetchAll();
  • Exceptions by default (PEAR is an optional dependency - not required)
  • It should be FAST!!! - standard operations should require ZERO other classes - so no loading up a complex set of support classes.  (odd or exotic features will be moved to secondary classes)
Feel free to watch the repo (we are using auto commit, so the commits are pretty meaningless at present)


15 Dec 2011

Deleting the View and Controller..

This is NOT a post for people who do not use MVC; please delete your code and write it properly.. Anyway, as anybody who has used or written a reasonable framework in PHP knows, MVC is pretty much the golden rule for implementation. There are a dozen frameworks out there based around the principles, with different levels of complexity.

My own framework was designed around those principles, and for many years worked perfectly for the classic 'display a crap load of HTML pages using information from a database' sites. The Model (DB_DataObjects), View (HTML_Template_Flexy) and Controller (classes that extend HTML_FlexyFramework_Page) delivered pages. Designing sites basically involved gluing all these pieces together. As the sites grew over time, shared code usually ended up in the Models, and each page had a controller which might render the shared templates. All was well, and code was reasonably easy to maintain and extend.

Now, however, almost all the projects I've worked on in the last few years use the Roo Javascript library (the ExtJS fork), and are built on top of the Pman components (originally a project management tool that grew into a whole kit of parts). One of the key changes in the way the code is written is how little code is now needed to get the information from the database to the end user.

Obviously the whole HTML templating has been thrown out of the window; other than the first primary HTML page, the whole user interface is built with Javascript and generated by user interface builder tools. The interaction of the interface is handled by signals (listeners) on the Roo Javascript components. These in turn call the back end (PHP code) and almost always retrieve JSON encoded data, which is rendered using the UI toolkit.

When I first started moving to this development model, I tended to retain the previous idea of having multiple controllers to handle the Select/Create/Update/Delete actions. As time went on, rather than have multiple controllers for each of those actions, I would use a single controller to manage a single Model entity (like Product). POST would always update/modify the model, and GET would always just view and query the data.

Eventually I realized that since all these controllers were essentially doing the same thing, a single generic controller should be able to do everything that all those individual controllers were doing. And so was born the Pman_Roo class.

So basically {index.php}/Roo/{TableName} provides generic database access for the whole application. Most code development on the PHP side is now contained within the DataObject Models. This greatly enhances code reuse, as similar code ends up closer together. Unlike before, where shared code was moved from the controllers to the model when necessary, now most of the code starts off in the model. This speeds project development up considerably, not to mention the huge saving of not having to try and manipulate data into HTML.

How does it work?


A GET or POST request is received by the server, either from Roo's Form/Grid/Tree or directly via Pman.Request(), a handy wrapper around Roo.Ajax that handles error messages nicely.

The request {index.php}/Roo/{TableName} checks that the table name is valid, then goes on to perform the following actions depending on the params supplied. The documentation in the Pman_Roo class is the most up-to-date documentation, and details what calls are made (if available) on the relevant dataobject.
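To give a flavour of the idea, here is a rough sketch of the concept only - not the actual Pman_Roo code; the function name and error handling are purely illustrative:

// rough sketch of a generic table controller - NOT the real Pman_Roo implementation
function handleRoo($tableName)
{
    $d = DB_DataObject::factory($tableName);   // the real code validates the table name first
    if (PEAR::isError($d)) {
        header('HTTP/1.0 400 Bad Request');
        exit;
    }
    if ($_SERVER['REQUEST_METHOD'] == 'GET') {
        // view / query: treat request parameters as equality conditions and return JSON
        $d->setFrom($_GET);
        $d->find();
        $ret = array();
        while ($d->fetch()) {
            $ret[] = $d->toArray();
        }
        echo json_encode(array('success' => true, 'data' => $ret));
        return;
    }
    // POST: create or update, then echo the resulting record as JSON
    if (!empty($_POST['id']) && $d->get($_POST['id'])) {
        $old = clone($d);
        $d->setFrom($_POST);
        $d->update($old);
    } else {
        $d->setFrom($_POST);
        $d->insert();
    }
    echo json_encode(array('success' => true, 'data' => $d->toArray()));
}

The real class obviously does far more than this (permissions, joins, sorting, paging), which is exactly why the per-table controllers became redundant.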

Using the class, it is now possible to handle pretty much any database related query without implementing any controller, while easily managing data permissions.

A snapshot of the current documentation is in the extended view.. (the latest will be in the source)

02 Sep 2011

Watch-out PHP 5.3.7+ is about.. and the is_a() / __autoload() mess.

Well, for the first time in a very long while I had to post to the PHP core developers list last week; unfortunately the result was not particularly useful.

The key issue was that 5.3.7 accidentally broke is_a() for a reasonably large number of users. Unfortunately the fixup release 5.3.8 did not address this 'mistake', and after a rather fruitless exchange I gave up trying to persuade the group (most people on the mailing list) that reverting the change was rather critical (at least Pierre supported reverting it in the 5.3.* series).

Anyway, what's this all about? Basically, if you upgrade to any of these versions and

a) use __autoload() 
or
b) any of your code calls is_a() on a string, 

you will  very likely get strange failures..


The change in detail.


In all versions of PHP since 4.2 the is_a() signature looked like this:

bool is_a ( object $object , string $class_name )

As a knock-on effect of fixing a bug with is_subclass_of(), somebody thought it was a good idea to make the two functions' signatures consistent, so in 5.3.7+ the signature is now:

bool is_a ( mixed $object_or_string , string $class_name )

And to make matters worse, that change to the first argument ('object_or_string') means it will also call the autoloader if the class is not found.
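As a quick illustration of what that means in practice (assuming a 5.3.7 / 5.3.8 runtime; on 5.3.6 and earlier both calls simply returned false for a string argument):

var_dump(is_a('PEAR_Error', 'PEAR_Error'));     // 5.3.7/5.3.8: bool(true)
var_dump(is_a('No_Such_Class', 'PEAR_Error'));  // 5.3.7/5.3.8: __autoload('No_Such_Class') is called first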


How is_a() has been used in the past.


On the face of it, this would not seem like a significant change; however, you have to understand the history of is_a(), and why it was introduced. In the early days of PEAR (before PHP 4.2) there was a method called PEAR::isError($mixed), which contained quite a few tests to check whether $mixed was an object and an instance of 'PEAR_Error'. A while after PHP 4.2 was released, this was changed to use the new wonderful feature, and basically became return is_a($mixed, 'PEAR_Error').

Since PEAR existed before exceptions (and is still a reasonable pattern for handling errors), it became quite common practice to have returns from methods which looked like this:

@return {String|PEAR_Error} $mixed  return some data..

So the caller would check the return using PEAR::isError(), or quite often just is_a($ret, 'PEAR_Error') if you knew that the PEAR class might not have been loaded.

So now comes the change; let's see what happens.

The __autoload() issue.


Personally I never use __autoload; it's the new magic_quotes for me, making code unpredictable and difficult to follow (read the post about require_once being part of your documentation). But anyway, each to their own, and for the PEAR packages I support I will usually commit any reasonable change that helps out people who are using autoload.

So there are users out there using autoload with my PEAR packages, as I quickly found last week. Quite a few of these packages use the is_a() pattern, and the users who had implemented __autoload() had very smartly decided that calling autoload with the name of a class that could not or did not exist was a serious error condition, and they either died or threw exceptions.

Unfortunately, since is_a() was sending all of the string data it got straight into __autoload(), this happened rather a lot, leading to a run-around hunt for all calls to is_a(), and code changes being put in to ensure that a string is never passed as the first argument.


The is_a(string) issue


While I'm not likely to see the autoload issue in my own code, I'm not sure I really appreciate having to fix it so quickly without a timetable for the change. The other change, which may cause random, undetectable bugs, is the accepting of a string.

Imagine this bit of code:

function nextTokString() {
    if (!is_string($this->tok[$this->pos])) {
        return PEAR::raiseError('....');
    }
    return $this->tok[$this->pos++];
}

... some code..
$tok = $this->nextTokString();
if (is_a($tok, 'PEAR_Error')) {
    return $tok;
}
... do stuff with string.

Now what happens if the token is the string 'PEAR_Error'? is_a() will now return true. The big issue with this is that unless you know about the is_a() change, this bug is going to be next to impossible to find.. No warning is issued; is_a() just silently returns true, where before it returned false.

I was hoping that PHP 5.3.9 would go out the door with this reverted, or at least a warning stuck on string usage of is_a(), but nope, none of my efforts of persuasion appear to have worked.

While I do not think the change is particularly necessary (as the use case for the new signature is very rare, and achievable in other ways), I think reverting this change before PHP 5.3.7+ goes into major deployment is rather critical (yes, it can take months before PHP releases start commonly arriving on servers). Then, if it's deemed a necessary change (by vote), go for it in 5.4... and add a warning in the next version in the 5.3 series..

  

Anyway, the fixes / workarounds:


The simplest fix is to prepend tests with is_object():

eg. 

if (is_a($tok, 'PEAR_Error')) {

becomes

if (is_object($tok) && is_a($tok, 'PEAR_Error')) {

or

if ($tok instanceof PEAR_Error) {


While you could start looking at the code and determining whether you really need to prefix it with is_object(), the reality is that unfortunately it may be simpler to stick this extra code in, just in case you start delivering strings where objects were expected.

Update


This has been fixed in 5.3.9; however, part of this derives from some confusion over instanceof.

When PHP5 was released and added instanceof, using it when the class did not exist caused a fatal error:

if ( $tok instanceof Unknown_Class ) {

However, this was changed in 5.1 so that it no longer causes a fatal error. The documentation is not totally clear on this, especially for anyone who used PHP 5.0.*.

Unfortunately, since migration times are slow, supporting 5.0-5.1 is a fact of life for anyone writing libraries (actually most of the libraries I write for still provide support for PHP4). So using any 'new' feature of the language basically prevents you from supporting older versions of PHP with new code.

In this case, PHP 5.0.* usage has slipped below 0.3%, so removing support for it should be fine.



10 Apr 2011

How to spam in PHP..

Well, after having written a huge anti-spam system, it's now time to solve the reverse problem: sending out huge amounts of email. Only kidding, but the idea of scaling email sending using PHP is quite interesting.

The reason this has been relevant in the last two weeks is twofold: first off, my slow and sometimes painful rewrite of mtrack has got to the point of looking at email distribution. Along with this, I have a project that needs to distribute press releases and track responses. Since both projects now use the same underlying component framework (Pman.Core and Pman.Base), it seemed like an ideal time to write some generic code that can solve both issues.

Classic mailing, you press, we send...

I've forgotten how many times I've written code that sends out email; pretty much all of it tends to be of the variant where the user of the web application presses a button, then the backend code generates one or many emails and sends them out, most frequently using SMTP to the localhost mailserver.

In most cases this works fine. You might run into trouble if your local mailserver is down or busy, but for the most part it's a very reliable way to send out less than 10 emails in one go.

Queues and bulk sending

One of my associates makes a tiny amount of money by offering the service of sending out newsletters and news about bars and restaurants. To do this he bought a commercial PHP package, which I occasionally have the annoying task of debugging and maintaining. What is interesting about this package are the methods it uses to send out email. Basically, once you have prepared a mailout and selected who it goes to, it creates records in a table that goes something like this:
User X  | Mailout Y
123     | 34
124     | 34
...
There are two methods to then send these mailouts: the first is via the web interface, which uses a bit of ajax refresh to keep reloading the page and send out a number of emails in one go (eg. 10 at a time), or there is the cron version that periodically runs and tries to send out all the mails in that table.

This method always sends to the localhost mailserver, and lets that sort out the bounces, queuing, retrying etc. It has a tendency to be very slow, and to use up a huge amount of memory when sending out huge volumes of email. Most of it gets stuck in the mailserver queue, and the spammer has no real idea whether the end users might have received it. If the mailserver gets stuck or blocked, the messages can often sit in the queue until they expire 2 days later, by which time the event at the bar may have already occurred.

The MTrack way

I'm not sure if I mentioned it before, but I was intrigued by the method used by mtrack when I first saw it. For those unaware of what mtrack is, it's an issue tracker based on trac. One of its jobs is to send out emails to anyone 'watching' a project, bug etc.

From my understanding of what mtrack was doing (the original code has long been forgotten and removed now), it set up a 'watch' list, eg. Brian is watching Project X, and Fred is watching Issue 12.

When Issue 12 changed, or someone committed something to Project X, no actual email was sent at that moment. This obviously removed a failure point on the commit or bug update, and if you had hundreds of people watching an issue (like on launchpad, for example), this would prevent the server hanging while it was busy sending all the emails.

The unfortunate downside was that to make the notifications work a cron job was required; this cron job had to hunt down all the changes that had occurred and cross reference them with all the people who might have been watching those issues. The code for this was mindblowingly complex, and I suspect it was a port of the original trac code.

As somebody who basically looks at really complex conditional code and wonders 'is that really the best way to do this', I really had to come up with an alternative.

Bulk mailing done right....

So to solve my issues with mtrack and the other project, I devised a system that was both simple and fast at the same time. Here's the lowdown.

First off, both the mtrack and mailout systems generate the distribution list when the web application user pushes the button. So for mtrack, somebody updates the ticket (adding a comment for example), and the controller for the ticket page basically does a few things:

a) If you have modified the ticket owner (or developer), make sure they are on the 'watch list' of subscribers.
b) ask the watch list (the dataobject code for core_watch) to generate a list of people to notify (in our core_notify table), and make sure we do not send an email to the person filling in the form (as he knows he just did that and does not need to be reminded..)

The other mailout system also just generates elements in the core_notify table. Actually, since the database table for the distribution targets is different in that application, we have a separate table called XXXX_notify, and using the joys of DB_DataObject and object orientation, that class just extends the core_notify one. From what I remember, the only bit of code in that class is var $__table = 'XXXX_notify', since the links.ini handles the reference table data.
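In other words, the whole subclass is something along these lines (class names here are illustrative, not the real ones):

class DataObjects_Xxxx_notify extends DataObjects_Core_notify
{
    // the only real difference - point at the application's own table;
    // the reference / link data comes from links.ini as usual
    var $__table = 'xxxx_notify';
}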

And now for the really cool part: sending the mails out. Obviously this is done via cron jobs (so as not to disrupt the user interface). The backend consists of two parts (pretty much how a mailserver works). The first is the queue runner. This basically runs through the notify table, and makes a big list of IDs of what to send out. It uses the ensureSingle() feature of HTML_FlexyFramework to ensure only one instance of the queue can be running at once.

Then, rather than sending each email sequentially, it basically proc_open()'s a new PHP process to send each email. This enables the queue to send many emails concurrently, rather than relying on a single pipeline. The code monitors these sub-processes, and ensures that only a fixed number are running at the same time - we do not want to look too much like a spammer to our ISP..
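A minimal sketch of that queue runner loop (the worker script name and the concurrency limit are made up for illustration; the real code also logs results and handles failures):

$max     = 10;        // how many child senders to allow at once
$running = array();   // notify id => process handle

foreach ($ids as $id) {          // $ids = pending rows pulled from core_notify
    // wait for a free slot
    while (count($running) >= $max) {
        foreach ($running as $k => $p) {
            $st = proc_get_status($p);
            if (!$st['running']) {
                proc_close($p);
                unset($running[$k]);
            }
        }
        usleep(100000);   // 0.1 seconds
    }
    $spec = array(0 => array('pipe', 'r'), 1 => array('pipe', 'w'), 2 => array('pipe', 'w'));
    $running[$id] = proc_open('php mail_send_one.php ' . escapeshellarg($id), $spec, $pipes);
}
// ... then wait for the remaining children to finish before exiting.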

The code that sends out a single email can then use MX resolution, send direct to the real email server, and log the result (success, failure or try later).

Now to actually test all this....


22 Jan 2011

mtrack and work flowing commits into a live site

Working with outsourced resources can be frustrating at times. In the past I've used various techniques for this; normally it involved outsourcing a large bulk project at a fixed cost, setting down a few rules about code quality, design etc., and letting them get on with it. This works reasonably well, as the mess that gets created is controllable if they have followed the rules.

More recently I've been working with outsourced contractors who work on an hourly basis. The results have been mixed, and as we do it more frequently we are beginning to refine our working process.

Last Thursday, however, the client for one of the outsourced projects called frantically wanting the live site refreshed to show the development changes. Luckily we had decided to go with revision-control-only access to the site some while back (as one of my previous posts mentioned, given the mess before that and our problems with git, we concluded the contractors were not capable of using it, so we had ended up with a subversion frontend committing into a git backend).

Making the site go live is just a matter of running git pull in the live directory (and occasionally git reset --hard to clear out any crap). However, after urgently updating the site to the current development state, a horrible problem was noticed on the front page, and I was tasked with trying to fix it really quickly.

To my horror though, as I had taken a hands-off approach on this project (due to budgeting requirements), the code was in a far worse state than I feared. A few weeks ago we had started trying to force the contractors to follow some basic good coding practices, like writing commit messages with meaningful descriptions, not using the development server as a test server, and always committing unix line breaks. This was all done via commit hooks on subversion, and commits were being rejected frequently (much to my evil amusement).

However, this was not enough to prevent code that had been created many months before from getting worse with age (bad decisions, with no sensible review process, being made worse by more feature requests). The result was that what should have been a simple one-line fix to change the formatting of a currency output ended up as an attempt to understand some 400+ lines of gibberish.

At this point, we concluded enough is enough: the savings from not reviewing this code previously were going to cost more and more in the future unless this mess was stopped. Hence mtrack came into the picture..

For those unaware, mtrack is Wez Furlong's project to replace trac with a PHP based implementation. It's relatively new, and looks like it was developed for a need that Wez had internally.

My idea was that we would continue to allow the developers to commit into the repository; the only difference would be that they would have to add ticket numbers to the commit messages, and we would have a simple review process for the code using mtrack, so that we would only close an issue when it had been fixed to a reasonable level of code quality (and worked properly).

Setting up mtrack is extremely simple: the introduction instructions will get you started. However, as I found, doing anything more complex than what is provided requires modifying significant chunks of the code.

The overall sense I get from mtrack is that it has the potential to be an outstanding project. It could rival things like bugzilla; however, it could seriously do with some tidying up and rethinking of some parts.

Anyway, read the extended article for more detailed thoughts as I started to implement our desired workflow with mtrack. It may help if you are diving into mtrack as well.




18 Jan 2011

Easy way to make Word documents with images from PHP

It's yet another of those "solve this quickly and cheaply" problems....

How to produce nice Word documents with images and all, without ending up doing lots of hard coded PHP calling some obtuse library in a way that would be difficult to maintain.

This one took a bit longer than expected. My first idea was to generate HTML files, then run them through abiword's command line, as that was the hint I got from googling:

abiword --to=doc  myfile.html

Unfortunately, although from looking at abiword's source code it should have worked, nothing actually came out. After trying a few other magic tools like unoconv, I was getting close to admitting defeat.

However, after a nice walk away from the desk and putting the thinking cap on, it came to me.. abw is abiword's XML file format. It's about as simple as HTML. So why not generate an abw file, and convert that into Word?

Images in abw files are just base64 data at the end of the file, each given a specific name and referenced in the body of the document where you want to use them. Trivial to generate... and even easier if you cheat a little by not using a DOM based XML generator for the content (good old Flexy templates), then just running them through abiword as planned:

abiword --to=doc  myfile.abw
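Roughly what that looks like in PHP - a simplified sketch only; the .abw markup here is trimmed down from memory, so treat the element names and attributes as approximate, and in practice the XML comes out of a Flexy template rather than string concatenation:

// build a minimal .abw file with one embedded image, then let abiword convert it
$imgData = base64_encode(file_get_contents('logo.png'));

$abw = '<?xml version="1.0" encoding="UTF-8"?>' . "\n"
     . '<abiword xmlns="http://www.abisource.com/awml.dtd">' . "\n"
     . ' <section>' . "\n"
     . '  <p>Our report title</p>' . "\n"
     . '  <p><image dataid="image_logo"/></p>' . "\n"
     . ' </section>' . "\n"
     . ' <data>' . "\n"
     . '  <d name="image_logo" mime-type="image/png" base64="yes">' . $imgData . '</d>' . "\n"
     . ' </data>' . "\n"
     . '</abiword>';

file_put_contents('myfile.abw', $abw);
exec('abiword --to=doc myfile.abw');   // output file name assumed to be myfile.doc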

Quick, easy and pretty Word documents generated from web pages..

28 Dec 2010

Looking at opencart


There's been quite a demand recently for online shops. While I do have a custom online shop, it's currently quite specific to a client and does not have a generic frontend, so any kind of re-purposing involves creating a new design. Obviously that looks like a good long-term plan, along with open sourcing it properly.

In the meantime, however, I discovered opencart a few weeks ago, and for a quick and dirty shop delivery it's not too bad.

On the positive side, installation and basic setup of products is very simple. We checked the code out into git-svn and ran through the installation process, committing changes, then used the git-ftp code to upload it to the target server.

This gave a manageable installation with full revision control, and after adding the client's logo to the site and setting up things like paypal, shipping etc., we let them loose adding their products.

All of this was quite smooth, but then came the real fun. "But can it do this..." the classic second sentence after you get started..

Their first request was to add extra fields to the product description.

In a perfectly designed system (eg. the shopping system I had already designed), this would be a matter of:

  1. add some extra fields to the database (eg. lead_time VARCHAR(32)), say in a local_mods.sql file.
  2. add some extra HTML to the two templates, or use app.Builder, in the 'overridden templates' folder (which Template_Flexy supports)
This would be feasible as the model layer would not need to understand much about random extra columns in the database that do not really mean much to the application, but are always essential for the client. This is all possible because the FlexyFramework/DataObjects/Template_Flexy combo handles extra columns without any modification to controllers or models. No more than 10 minutes' work..
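For example, with the lead_time column above, the whole change in my setup would look something like this (a sketch; regenerating the DataObject schema .ini after the ALTER is assumed):

// local_mods.sql:  ALTER TABLE product ADD COLUMN lead_time VARCHAR(32) NOT NULL DEFAULT '';
// (then regenerate the DataObject schema ini)

// the column is simply there - no model or controller changes needed:
$p = DB_DataObject::factory('product');
$p->get($id);
echo $p->lead_time;

// and in the overridden Flexy template:
//   <span>Lead time: {product.lead_time}</span>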

To follow the same approach with opencart, however, would have involved:
  1. modify the database
  2. modify the model code for the product (checking each method to add the new column to any relevant hand-coded SQL)
  3. modify the admin model code for the product (checking each method to add the new column to any relevant hand-coded SQL)
  4. checking the controller class so that it adds the extra variables to the view layer; the controller for just viewing a product is over 400 lines, for what should be done in around 50.
  5. checking the controller class for the administration side to see if it correctly sends the data without knowing the model had changed (which actually it did)
  6. adding some HTML (and lightly sprinkled PHP) in the template for the frontend, using rather verbose, non-templated language, and ensuring it's all correctly escaped.
  7. adding some HTML (and lightly sprinkled PHP) in the template for the backend, using rather verbose, non-templated language, and ensuring it's all correctly escaped.

In the end I concluded that even making all these changes would result in a forked codebase that would become increasingly complex to maintain as the client demanded more changes and opencart released more versions.

What I ended up doing was using their category mechanism, and creating categories for different lead times, then modifying the front end controller to read the hidden extra categories, and creating a forked template for the frontend.

Even this is still a problematic fork, as SQL needed modifying in the model, and extra code was added in the controllers.

Having said all that, the experience was far smoother than the morning's job of creating a development version of a Joomla site, which suffers from the horrific problem that the codebase can be modified by the web frontend, making revision control an absolute nightmare. It's an even better lesson in poor design....

23 Dec 2010

And now for some Christmas entertainment, git, outsourcing and PHP error messages.

Well, you either have to laugh or cry at this.

One of the projects I help out on, mostly by looking after the production server, uses Joomla. The owner has employed a number of outsourced resources via odesk to do the development, in the belief that it's cheaper that way.

After a few months of them modifying the code and occasionally breaking the site, I changed the configuration so they had to commit via git, which in turn would copy the committed code onto the live server, so there was at least some tracking of the changes they were making.

Before this, they had been in the habit of modifying a file like index.php, calling it index-2010-04-12.php and uploading it via ftp; there were about half a dozen versions of each file on the server (basically quite a mess).

So, since git had been so mindblowingly useful for all my projects, I made the assumption that it would be effective as a revision control system on the server.

Unfortunately, over the last few months that decision has come back to bite us.

a) pretty much all the contractors they employed used windows (usually a sign of an inexperienced developer)
b) git on windows is nowhere near as mature (cygwin being slightly better than the msysgit version), especially when handling http-backend based repositories.
c) the contractors ran into all sorts of problems dealing with the command line and messages like 'error: Untracked working tree file 'images/M_images/Thumbs.db' would be overwritten by merge.  Aborting' - requiring either teamviewer sessions to walk them through the fixes or rather fruitless email conversations explaining what they should do.
d) the contractors had no idea how to use command line mysql (I even saw this in one of their logs - #git mysqldump -h... )
e) they would send me screenshots of the cygwin terminal???? copy and paste anyone...

So in the end I've given up on letting them use git directly, and installed subversion with a commit hook that copies the change into the git central store.

But that was the least of the problems I've seen. When the client started looking at his billing, he was beginning to wonder what he was paying for. In one bill some 8 hours had been billed, yet there had been little to no change on the site. odesk provides screenshots of what the contractor was doing, so I thought I would have a look.

It appears the problem lay with their lack of understanding of PHP (the language they claim to know in order to be able to do these jobs). Our server is running PHP 5.3, which produces a few new warning messages, causing problems with older code. I've insisted that we do not downgrade: as the exim exploit illustrated, depending on distribution-provided packages for something like this can be the difference between an instant upgrade and messing around with build issues while a server is wide open to attack.

Anyway, the problem they ran into was:

Deprecated: Assigning the return value of new by reference is deprecated

This was in the nusoap library, and was caused by this syntax:

 $this->wsdl =& new wsdl(.......

The fix is trivial: just remove the & before the new (it's not needed in PHP5), as shown below. However, rather than doing this, they first tried googling it, and apparently after not understanding the error message, they decided to comment out the whole body of the loadWSDL() method, then went on to find all the calls to the method and comment them out.
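For reference, the fix is just dropping the reference operator (the constructor arguments are elided as in the original line):

$this->wsdl =& new wsdl(.......   // before: PHP4-style reference assignment - triggers the deprecation warning on 5.3
$this->wsdl = new wsdl(.......    // after: objects are assigned by handle in PHP5, so the & simply goes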

I guess it might be worth changing that error message to  "There is an '&' before 'new' on line XXX, it is not needed, so you should remove it."

Then again, as I keep reminding the client, you get what you pay for....


01 Sep 2010

Big step forward in Modular Database Applications with DataObjects

Being a software developer is all about developing applications faster and delivering quicker, while at the same time ensuring that quality is not lost and readability is kept. DB_DataObjects is one of the key tools in my productivity toolkit. It was originally designed as a way of ensuring that shared database related code ends up in the right place - in the Model layer of the application, rather than the View or Controller (or worse, mixed into some hybrid PHP/HTML garbage..) - along with doing cool stuff like query building etc.

Over the years of using DataObjects, I've built up quite a library of reusable DataObjects which to some degree can be plugged into any project: a Person object that handles login/authentication and authorization (working with the Permissions and Group objects), an Image object that handles storage of images, and Files that can provide file type conversion for rendering (using things like unoconv, etc.).

More recently, I've been using the Roo Javascript library as the UI component, and the Pman Components on top of the very lightweight HTML_FlexyFramework. The result is a very modular set of application development parts that can quickly be thrown together to build applications. Here's how they all fit together, and how it just got a whole lot more modular and flexible...

