
Wednesday, October 03, 2018

Promises and Lightning Components

In 2015, the ECMAScript specification (ES6) introduced Promises, and finally (pun intended) the Javascript world had a way of escaping from callback hell and moving towards a much richer syntax for asynchronous processes.

So, what are promises?


In short, it's a syntax that allows you to specify callbacks that should execute when a function either 'succeeds' or 'fails' (is resolved or rejected, in Promise terminology).

For many, they're a way of implementing callbacks that makes a little more sense syntactically, but for others they're a new way of structuring asynchronous code that reduces the dependencies between operations and provides you with some pretty clever mechanisms.
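By way of a minimal, generic sketch (nothing Lightning-specific yet - 'getAnswer' and its hard-coded answer are purely illustrative):

// A function that returns a Promise; 'setTimeout' stands in for real async work.
function getAnswer() {
    return new Promise( function( resolve, reject ) {
        setTimeout( function() {
            let answer = 42;
            if ( answer !== undefined ) {
                resolve( answer );           // success - the value is passed to 'then'
            }
            else {
                reject( 'No answer found' ); // failure - picked up by the reject handler
            }
        }, 100 );
    });
}

getAnswer().then( function( answer ) { console.log( answer ) } );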

However, this article isn’t about what promises are, but rather:

How can Promises be used in Lightning Components, and why would you want to?


As with any new feature of Javascript, double check the browser compatibility to make sure it covers your target browser before implementing anything.

If you want some in depth info on what they are, the best introduction I’ve found is this article on developers.google.com

In addition, Salesforce have provided some very limited documentation on how to use them in Lightning, here.

Whilst the documentation's inclusion can give us hope (Salesforce knows what Promises are and expects them to be used), the documentation itself is pretty slim and doesn't really go into any depth on when you would use them.

When to use Promises


Promises are the prime candidate for use when executing anything that is asynchronous, and there’s an argument to say that any asynchronous Javascript that you write should return a Promise.

For Lightning Components, the most common example is probably when calling Apex.

The standard pattern for Apex would be something along the lines of:

getData : function( component ) {
    let action = component.get("c.getData");
        
    action.setCallback(this, function(response) {
            
        let state = response.getState();

        if (state === "SUCCESS") {
            let result = response.getReturnValue();
            // do your success thing
        }
        else if (state === "INCOMPLETE") {
            // do your incomplete thing
        }
        else if (state === "ERROR") {
            // do your error thing
        }
    });
    $A.enqueueAction(action);
}

In order to utilise Promises in such a function you would:
  1. Ensure the function returned a Promise object
  2. Call 'resolve' or 'reject' based on whether the function was successful

getData : function( component ) {
    return new Promise( $A.getCallback(
        ( resolve, reject ) => {

        let action = component.get("c.getData");
    
        action.setCallback(this, function(response) {
            
            let state = response.getState();

            if (state === "SUCCESS") {
                let result = response.getReturnValue();
                // do your success thing
                resolve();
            }
            else if (state === "INCOMPLETE") {
                // do your incomplete thing
                reject();
            }
            else if (state === "ERROR") {
                // do your error thing
                reject();
            }
        });
        $A.enqueueAction(action);
    }));
}

You would then call the helper method in the same way as usual:

doInit : function( component, event, helper ) {
    helper.getData( component );
}

So, what are we doing here?

We have updated the helper function so that it now returns a Promise, constructed with a new function that has two parameters, 'resolve' and 'reject'. When the helper is called, the Promise is constructed and returned, and the function that we passed into the constructor is immediately executed.

When our function reaches its notional 'success' state (inside the 'state === "SUCCESS"' branch), we call the 'resolve' function that is passed in.

Similarly, when we get to an error condition, we call 'reject'.

In this simple case, you'll find it hard to see where 'resolve' and 'reject' are defined - because we never define them. The Promise supplies them, and since we haven't attached any handlers to the Promise yet, calling them has no visible effect - the Promise essentially operates as if it wasn't there at all. The functionality hasn't changed.

Aside - if you're unfamiliar with the 'Arrow Function' notation - E.g. () => { doThing() } - then look here or here. And don't forget to check the browser compatibility.
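By way of a quick, throwaway comparison (both lines below do the same job - the main practical difference is that an arrow function keeps the 'this' of its enclosing scope rather than defining its own):

// A classic anonymous function and its arrow function equivalent.
let classic = function( response ) { return response.getState(); };
let arrow   = ( response ) => response.getState();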

So the obvious question is... Why?


What does a Promise give you in such a situation?

Well, if all you are doing is calling a single function that has no dependent children, then nothing. But let's say that you wanted to call "getConfiguration", which called some Apex, and then *only once that was complete* you called "getData".

Without Promises, you'd have 2 obvious solutions:
  1. Call "getData" from the 'Success' path of "getConfiguration".
  2. Pass "getData" in as a callback on "getConfiguration" and call the callback in the 'Success' path of "getConfiguration"
Neither of these solutions is ideal, though the second is far better than the first.

That is - in the first we introduce an explicit dependency between getConfiguration and getData. Ideally, this would not be expressed in getConfiguration, but rather in doInit (or a helper function called by doInit). It is *that* function which decides that the dependency is important.

The second solution *looks* much better (and is), but it's still not quite right. We now have an extra parameter on getConfiguration for the callback. We *should* also have another callback for the failure path - otherwise we are expressing that only success has a further dependency, which is a partial leaking of knowledge.

Fulfilling your Promise - resolve and reject


When we introduce Promises, we introduce the notion of 'then'. That is, when we 'call' the Promise, we are able to state that something should happen on 'resolve' (success) or 'reject' (failure), and we do it from *outside* the called function.

Or, to put it another way, 'then' allows us to define the behaviour of the 'resolve' and 'reject' functions that get passed into our Promise's function when it is constructed.

E.g.

We can pass a single function into 'then', and this will be the 'resolve' function that gets called on success.

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
      .then( () => { helper.getData( component ) } );
}

Or, if we wanted a failure path that resulted in us calling 'helper.setError', we would pass a second function, which will become the 'reject' function.

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
      .then( () => { helper.getData( component ) }
           , () => { helper.setError( component ) } );
}

Aside - It's possible that the functions should be wrapped in a call to '$A.getCallback'. You will have seen this in the definition of the Promise above. This is to ensure that any callback is guaranteed to remain within the context of the Lightning Framework, as defined here. I've not witnessed any problem with not including it, although it's worth bearing in mind if you start to get issues on long running operations.
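If you do want to be safe, wrapping the 'then' handlers would look something like this (the same 'doInit' as above, just with the wrappers added):

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
      .then( $A.getCallback( () => { helper.getData( component ) } )
           , $A.getCallback( () => { helper.setError( component ) } ) );
}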

Now, this solution isn't vastly different to passing the two functions directly into the helper function. E.g. like this:

doInit : function( component, event, helper ) {
    helper.getConfiguration( component
                           , () => { helper.getData( component ) }
                           , () => { helper.setError( component ) } );
}

And whilst I might say that I personally don't like the act of passing in the two callbacks directly into the function, personal dislike is probably not a good enough reason to use a new language feature in a business critical system.

So is there a better reason for doing it?

Promising everything, or just something


Thankfully, Promises are more than just a mechanism for callbacks; they are a generic mechanism for *guaranteeing* that specified behaviour occurs once a Promise 'settles' (is fulfilled or rejected).

When using a simple Promise, we are simply saying that the behaviour should be that the 'resolve' or 'reject' functions get called. But that's not the only option. For example, we also have:

  * Promise.all - will 'resolve' only when *all* the passed-in Promises resolve, and will 'reject' if and when *any* of the Promises reject.
  * Promise.race - will 'resolve' or 'reject' when the first Promise to respond comes back with a 'resolve' or 'reject'.
Once we add that to the mix, we can do something a little clever...

How about having the component load with a 'loading spinner' that is only switched off when all three calls to Apex respond with success:

doInit : function( component, event, helper ) {
    Promise.all( [ helper.getDataOne( component )
                 , helper.getDataTwo( component )
                 , helper.getDataThree( component ) ] )
           .then( () => { helper.setIsLoaded( component ) } );
}

Or even better - how about we call getConfiguration, then once that’s done we call each of the getData functions, and only when all three of those are finished do we set the flag:

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
        .then( () => Promise.all( [ helper.getDataOne( component )
                                  , helper.getDataTwo( component )
                                  , helper.getDataThree( component ) ] ) )
        .then( () => { helper.setIsLoaded( component ) } );
}
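And although it isn't needed for this component, Promise.race opens up similar tricks - for example, giving up on a slow Apex call. A hypothetical sketch (the 'timeout' helper below isn't part of the framework; it's just a helper function that returns a Promise which rejects after the given delay):

timeout : function( ms ) {
    return new Promise( ( resolve, reject ) => {
        setTimeout( () => { reject( 'Timed out' ) }, ms );
    });
}

// Whichever Promise settles first wins - the data, or the five second limit
doInit : function( component, event, helper ) {
    Promise.race( [ helper.getData( component ), helper.timeout( 5000 ) ] )
           .then( () => { helper.setIsLoaded( component ) }
                , () => { helper.setError( component ) } );
}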

Now, just for a second, think about how you would do that without Promises...

Saturday, December 13, 2014

Throw it away - Why you shouldn't keep your POC

"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

It can be tempting, whilst answering these questions, to become attached to the code that you generate.

I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient and makes proof of concepts more effective by focussing minds on the questions that require answers.

Why do we set out on a proof of concept?

The purpose of a proof of concept is to (by definition):

  * Prove:  Demonstrate the truth or existence of something by evidence or argument.
  * Concept: An idea, a plan or intention.

In most cases, the concept being proven is a technical one.  For example:
  * Will this language be suitable for building x?
  * Can I embed x inside y and get them talking to each other?
  * If I put product x on infrastructure y will it basically stand up?

They can also be functional, but the principles remain the same for both.

It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

By extension, there is the implication that the questions posed need to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

This is quite different to what we set out to do in our normal software development process. 

We normally know the answer to the main question we're asking (how do we functionally provide a solution to this problem / take advantage of this opportunity?), and most of the time is spent focussed on building something that is solid, performs well and is generally good enough to live in a production environment - in essence, not answering the question, but producing software.

What process do we follow when embarking on a proof of concept?

Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

To illustrate:

Will this language be suitable for building x?

You may need to check that you can build the right type of user interfaces, that APIs can be created, and that there are ways of organising code that make sense for the long term maintenance of the system.

You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.

Can I embed x inside y and get them talking to each other?

You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

You probably don't need to develop an architecture that is clean with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations. Or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

If I put product x on infrastructure y will it basically stand up?

You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

Production development and Proof of Concept development are not the same

The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

That is, you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

That is intellectual currency, not software.  The important deliverable of a production build is the software that is built.

That is the fundamental difference, and why you should throw your code away.

Monday, May 17, 2010

Pleasing line

Gotta admit, I'm quite pleased with this line from my new ORM object-based database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');


Tuesday, September 04, 2007

Database Build Script "Greatest Hits"

I know it's been a quiet time on this blog for a while now, but I've noticed that I'm still getting visitors looking up old blog posts. It's especially true of the posts that relate to "The Patch Runner". Many of them come through a link from Wilfred van der Deijl, mainly his great post of "Version control of Database Objects".

The patch runner is my grand idea for a version controlled database build script that you can use to give your developers sandbox databases to play with, as well as ensuring that your live database upgrades work first time, every time. It's all still working perfectly here, and people still seem to be interested, so with that in mind I've decided to collate them a little bit; basically, provide an index of all the posts I've made over the years that directly relate to database build scripts, sandboxes and version control.

So, Rob's database build script 'Greatest Hits':

All of the posts describe processes and patch runners that are very similar to those that I use in my work every day. I started playing with these theories over 3 years ago now and there is no way I'd go back to implementing database upgrades the way I did before. However, I'd LOVE to hear ideas on how things can be improved. I'd be amazed if my three year old thinking was still up to date!

Thursday, July 12, 2007

Can a change in execution plan change the results?

We've been using Oracle Domain indexes for a while now in order to search documents and get back a ranked order of things that meet certain criteria. The documents are related to people, and we augment the basic text search with other filters and score metrics based on the 'people' side of things to get an overall 'suitability' score for the results in a search. Without giving too much away about the business I work with I can't really tell you much more about the product than that, but it's probably enough of a background for this little gem.

We've known for a while that the domain index 'score' returned from a 'contains' clause is based not only on the document to which that score relates, but also on the rest of the set that is searched. An individual document score does not live in isolation; rather it lives in the context of the whole result set. No problem. As I say, we've known this for a while and so have our customers. Quite a while ago they stopped asking what the numbers mean and learned to trust them.

However, today we realised something. Since the results are affected by the result set that is searched, this means that the results can be affected by the order in which the optimizer decides to execute a query.

I can't give you a full end to end example, but I can assure you that the following is most definitely the case on one of our production domain indexes (names changed, obviously):

We have a two column table 'document_index', which contains 'id' and 'document_contents'. Both columns have an index: the ID being the primary key and the other being a domain index.

The following SQL gives the related execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, :1, 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    DOMAIN INDEX SCOTT.DOCUMENT_INDEX_IDX01

However, the alternative SQL gives this execution path:

SELECT id, SCORE( 1 )
FROM document_index
WHERE CONTAINS( document_contents, 'Some text', 1 ) > 0
AND id = :2

SELECT STATEMENT
  TABLE ACCESS BY INDEX ROWID SCOTT.DOCUMENT_INDEX
    INDEX UNIQUE SCAN SCOTT.DOCUMENT_INDEX_PK

Normally, this kind of change in execution path wouldn't be a problem. But as stated earlier, the result of a score operation against a domain index is not just dependent on the individual records, but on the context of the whole result set. The first execution provides you a score for the single document in the context of all the documents in the table; the second gives you a score within the context of just that document. The scores are different.

Now obviously, this is an extreme example, but more subtle examples will almost certainly exist if you combine the domain index lookups with any other where clause criteria. This is especially true if you're using literal values instead of bind variables, in which case you may find the execution path changing between calls to the 'same' piece of SQL.

My advice? Well, we're going to split our domain index look-ups from all the rest of the filtering criteria. That way we can prepare the set of documents we want the search to be within, and know that the scoring algorithm will be applied consistently.

Wednesday, July 04, 2007

Handy "Alert Debugging" tool

One of the coolest things about OO Javascript is that methods can be written to as if they are variables. This means that you can re-write functions on the fly. Bad for writing maintainable code if you're not structured; fantastic for things like MVC controllers (rather than use the controller to forward calls on to the model, you use it to rewire the view so that it calls the model directly, and all without the view even realising it!).

What I didn't realise was that the standard window object (and probably so many others out there) can have its methods overwritten like any other. Probably the simplest example of that proves to be incredibly useful... changing the alert function so that the dialog becomes a confirm window. Clicking cancel means that no further alerts are shown to the user. Great for when you're writing Javascript without a debugger and have to resort to 'alert debugging'.
window.alert = function( s ) {
    // Clicking 'Cancel' replaces alert with a no-op, so no further alerts
    // are shown (assigning null would throw an error on the next call).
    if( !confirm( s ) ) window.alert = function() {};
}
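As a rough illustration of the controller trick mentioned at the top (a hypothetical sketch - 'view', 'model' and their methods are invented for the example):

// The model knows how to do the real work.
var model = {
    saveScore : function( score ) { console.log( 'saving ' + score ); }
};

// The view starts life with a do-nothing stub.
var view = {
    onSave : function( score ) {}
};

// The 'controller': rather than forwarding every call, it rewires the
// view's method to call the model directly - the view never knows.
view.onSave = function( score ) { model.saveScore( score ); };

view.onSave( 42 ); // logs 'saving 42'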
In case you're wondering... I found it embedded in the comments on this post: http://www.joehewitt.com/blog/firebug_for_iph.php. Cheers Menno van Slooten