Friday, 15 December 2017

Santa Force is Coming to Town


Introduction

This post comes all the way from Lapland, from the workshop of Santa Force, a long-time Salesforce user. This Salesforce instance has received some enhancements to help with the unique problems of this non-profit, which we’ll take a closer look at.

Customisations

There are a few additional fields on user, which don’t necessarily make a lot of sense when viewed in isolation:

[Screenshot: additional fields on the User object]

However, they are vital for a formula field:

 

[Screenshot: the formula field definition]

 

So as you can see, you’d better watch out, not pout and not cry. This might seem an odd requirement, but the help text tells you why:

 

[Screenshot: the formula field help text]

 

 

On to Santa Force now, he’s making a list view. The elves created one a few days ago, but there are some issues with it - the name doesn’t look right and a few fields are missing.

 

[Screenshot: the elves' list view]

 

Santa Force clones the list view, renames it and adds the required fields:

 

[Screenshot: the cloned and updated list view]

 

This is much better - he’s making a list view, he’s checked it twice and can now see who has been naughty or nice.

 

There’s also a process builder that works off another custom field on the contact record - a checkbox field labelled Asleep?

 

[Screenshot: the process builder on the Asleep? field]

 

So Santa Force knows if you are sleeping, and knows if you are awake, because it is posted to his Chatter feed:

[Screenshot: the Chatter post]

 

 

Finally, there’s one more field on contact - Goodness. Santa Force can look at this field and determine if you’ve been good or bad - there’s also some useful help text that will guide a contact’s behaviour if they view this through the community.

 

[Screenshot: the Goodness field and its help text]

 

Why?

The key question is: what is all this information being gathered for? Checking the calendar, we can see that it’s for an event scheduled for 24th December:

 

[Screenshot: the calendar event on 24th December]

 

As you can see, Santa Force is Coming to Town!

Happy Christmas everyone and thanks for reading the Bob Buzzard Blog.

 

Tuesday, 12 December 2017

SFDX and the Metadata API


Introduction

SFDX became Generally Available in the Winter ’18 release of Salesforce and I was ready for it. However, my use case was our BrightMedia appcelerator, which is mostly targeted at sandboxes and production orgs, where scratch orgs wouldn’t really help that much. The good news is that the SFDX CLI has support for metadata deploy/retrieve operations via the mdapi commands in the force topic.

What you need

In order to deploy metadata you need the directory structure and package.xml manifest - if you’ve used the Force.com migration tool (Ant) or the Force CLI, this should be familiar. For the purposes of this blog I’m using the GitHub repository from my Take a Moment blog post, which has the following structure:

src/
src/package.xml
src/aura/
src/aura/TakeAMoment
src/aura/TakeAMoment/TakeAMoment.cmp
src/aura/TakeAMoment/TakeAMoment.cmp-meta.xml
src/aura/TakeAMoment/TakeAMoment.css
src/aura/TakeAMoment/TakeAMomentController.js
src/aura/TakeAMoment/TakeAMomentHelper.js
src/aura/TakeAMoment/TakeAMomentRenderer.js

What you do

The first thing I do is clone the repo to my local filesystem and navigate to the directory created:

 > git clone https://github.com/keirbowden/TakeAMoment.git
Cloning into 'TakeAMoment'...
remote: Counting objects: 20, done.
remote: Total 20 (delta 0), reused 0 (delta 0), pack-reused 20
Unpacking objects: 100% (20/20), done.
> cd TakeAMoment

I then set this up as an SFDX project:

> sfdx force:project:create -n .
create sfdx-project.json
conflict README.md
force README.md
create config/project-scratch-def.json

Next I login to one of my dev orgs:

> sfdx force:auth:web:login
Successfully authorized keirbowden@googlemail.com with org ID …..
You may now close the browser

(For the purposes of this blog my login is ‘keirbowden@googlemail.com’ - substitute your username in the commands below)

Everything is now set up and I can deploy to my dev org:

> sfdx force:mdapi:deploy -d src -u keirbowden@googlemail.com
2884 bytes written to /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip using 36.913ms
Deploying /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip...
=== Status
Status:  Queued
jobid:  0Af80000003ynf6CAA
The deploy request did not complete within the specified wait time [0 minutes].
To check the status of this deployment, run "sfdx force:mdapi:deploy:report"

Sometimes the deployment completes immediately, but most of the time it takes a bit longer and I have to query the status via the command that the SFDX CLI helpfully gives me in the output:

> sfdx force:mdapi:deploy:report
=== Result
Status: Succeeded
jobid: 0Af80000003ynf6CAA
Completed: 2017-12-12T16:28:39.000Z
Component errors: 0
Components checked: 1
Components total: 1
Tests errors: 0
Tests completed: 0
Tests total: 0
Check only: true

And that’s it - my deployment is done!

Why would you do this?

That’s a really good question. For me, the following reasons are good enough:

  1. The SFDX CLI, unlike the Force Migration Tool, uses OAuth to authorise operations, so I don’t need to specify the password in plaintext. It also means that the rest of my team don’t need to learn Ant.
  2. The SFDX CLI, unlike the Force CLI, allows me to fire the deployment off and query the status later, plus it gives me a lot of information in the report.

It’s also clear to me that SFDX is the future, so aligning myself with the SFDX CLI seems a sensible move.

It also allows me to get the status of the deployment as JSON:

> sfdx force:mdapi:deploy:report --json

which gives me a ton of information:

{
  "status": 0,
  "result": {
    "checkOnly": false,
    "completedDate": "2017-12-12T16:28:39.000Z",
    "createdBy": "00580000001ju2C",
    "createdByName": "Keir Bowden",
    "createdDate": "2017-12-12T16:28:09.000Z",
    "details": {
      "componentSuccesses": [
        {
          "changed": "true",
          "componentType": "AuraDefinitionBundle",
          "created": "true",
          "createdDate": "2017-12-12T16:28:36.000Z",
          "deleted": "false",
          "fileName": "src\/aura\/TakeAMoment",
          "fullName": "TakeAMoment",
          "id": "0Ab80000000PEGWCA4",
          "success": "true"
        },
        {
          "changed": "true",
          "componentType": "",
          "created": "false",
          "createdDate": "2017-12-12T16:28:38.000Z",
          "deleted": "false",
          "fileName": "src\/package.xml",
          "fullName": "package.xml",
          "success": "true"
        }
      ],
      "runTestResult": {
        "numFailures": "0",
        "numTestsRun": "0",
        "totalTime": "0.0"
      }
    },
    "done": true,
    "id": "0Af80000003ynf6CAA",
    "ignoreWarnings": false,
    "lastModifiedDate": "2017-12-12T16:28:39.000Z",
    "numberComponentErrors": 0,
    "numberComponentsDeployed": 1,
    "numberComponentsTotal": 1,
    "numberTestErrors": 0,
    "numberTestsCompleted": 0,
    "numberTestsTotal": 0,
    "rollbackOnError": true,
    "runTestsEnabled": "false",
    "startDate": "2017-12-12T16:28:29.000Z",
    "status": "Succeeded",
    "success": true
  }
}

Having the results in JSON also means that I can easily process them in JavaScript, which I’ll cover in my next post.
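As a taster, here is the kind of processing I have in mind - a minimal Node sketch that runs the report command and summarises the result (the JSON property names are taken from the output above; everything else is illustrative):

// sketch: run the report command and summarise the deployment
// assumes the sfdx CLI is on the path and a default username is configured
const { execSync } = require('child_process');

const raw = execSync('sfdx force:mdapi:deploy:report --json', { encoding: 'utf8' });
const result = JSON.parse(raw).result;

console.log('Deployment ' + result.status + ': ' +
            result.numberComponentsDeployed + '/' + result.numberComponentsTotal +
            ' components deployed');

if (result.numberComponentErrors > 0) {
    console.error(result.numberComponentErrors + ' component error(s) - check the full JSON for details');
    process.exit(1);
}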


Monday, 20 November 2017

Animated Lightning Progress Bar


Introduction

The Salesforce Lightning Design System has a progress bar component, which can be used to communicate how far through a process the user is, or how close to achieving their target audience they are in BrightMedia. Typically this will be wired up to an attribute so that it updates automatically when the attribute value changes, for example:

<aura:application extends="force:slds" >
    <aura:attribute name="value" type="Integer" default="25" />
    <div class="slds-m-around_small">
        <div class="slds-text-heading_large slds-m-bottom_small">Progress Bar Demo</div>
        <div class="slds-m-bottom_large">
            <label>Enter value : <ui:inputNumber value="{!v.value}"/></label>
        </div>
        <div>
            <div style="width:25%" class="slds-progress-bar slds-progress-bar_circular slds-progress-bar_large"
                    aria-valuemin="0" aria-valuemax="100" aria-valuenow="{!v.value}" role="progressbar">
                <span class="slds-progress-bar__value" style="{! 'width:  ' + v.value + '%;'}">
                    <span class="slds-assistive-text">{!'Progress: ' + v.value + '%'}</span>
                </span>
                <div class="slds-text-align--center"><ui:outputNumber value="{!v.value}"/>
                   /
                <ui:outputNumber value="100"/></div>
            </div>
        </div>
    </div>
</aura:application>

which jumps the progress bar to the specified value.

While this works fine, it’s not the greatest user experience. When a progress bar updates I prefer to see an animated version that gradually makes its way to the final value. There’s no difference functionality-wise, but it just looks better to me.

The Animator

Animating a progress bar in JavaScript is simply a matter of making small changes to move between the current and desired value, typically via a timer that fires a function every ’n’ milliseconds to advance the value by a small amount. When using Lightning Components this is a little more tricky as the function executed by the timer is modifying the component outside of the framework lifecycle. In the revised app, when the user changes the desired value this is stored in a separate attribute and a controller function is executed:

<aura:application extends="force:slds" >
    <aura:attribute name="value" type="Integer" default="25" />
    <aura:attribute name="inputVal" type="Integer" default="25" />
    <aura:attribute name="timeoutRef" type="object" />
    <div class="slds-m-around_small">
        <div class="slds-text-heading_large slds-m-bottom_small">Progress Bar Demo</div>
        <div class="slds-m-bottom_large">
            <label>Enter value : <ui:inputNumber value="{!v.inputVal}" change="{!c.valueChanged}" /></label>
        </div>
        <div>
            <div style="width:25%" class="slds-progress-bar slds-progress-bar_circular slds-progress-bar_large"
                 aria-valuemin="0" aria-valuemax="100" aria-valuenow="{!v.value}" role="progressbar">
                <span class="slds-progress-bar__value" style="{! 'width:  ' + v.value + '%;'}">
                    <span class="slds-assistive-text">{!'Progress: ' + v.value + '%'}</span>
                </span>
                <div class="slds-text-align--center"><ui:outputNumber value="{!v.value}"
                    /
                >/<ui:outputNumber value="100"/></div>
            </div>
        </div>
    </div>
</aura:application>

Following best practice, the controller method simply delegates to the associated helper:

({
	valueChanged : function(component, event, helper) {
        helper.valueChanged(component, event);
	}
})

which does the actual work:

({
    valueChanged : function(cmp, ev) {
        var times=0;
        var current=cmp.get('v.value');
        var final=cmp.get('v.inputVal');
        var increment=1;
        if (final<current) {
            increment=-1;
        }
        var self=this;
        var timeoutRef = window.setInterval($A.getCallback(function() {
            if (cmp.isValid()) {
                var value=cmp.get('v.value');
                value+=increment;
                if (value==final) {
                    window.clearInterval(cmp.get('v.timeoutRef'));
                    cmp.set('v.timeoutRef', null);
                }
                cmp.set('v.value', value);
            }
        }), 100);
        cmp.set('v.timeoutRef', timeoutRef);
    }
})

The first part of the helper function simply captures the start and end values and figures out if we need to increment or decrement from the current value. Next the timer is set up to repeat every 100 milliseconds. As the function executed by the timer changes the app component attributes, I have to wrap it in a $A.getCallback function call, which ensures that the Lightning Components framework rerenders the markup. Once the current value equals the desired final value, the timer is cleared, otherwise it would fire forever.

Change values with care

Refreshing the app now animates the progress bar to apply the changed value. Incrementing by 1 is probably overkill, especially if you are dealing with values in the hundreds of thousands, for example. In this situation I’d simply decide how many “jumps” I wanted to apply to the progress bar, divide the difference between the current and desired value by the number of jumps, and then add the result to the value each time the timer fired.
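A sketch of that calculation, based on the helper above (the number of jumps and the clamping logic are illustrative rather than anything definitive):

({
    valueChangedInJumps : function(cmp, ev) {
        var jumps=20;                            // how many times the bar should move
        var current=cmp.get('v.value');
        var target=cmp.get('v.inputVal');
        var increment=(target-current)/jumps;
        var timeoutRef = window.setInterval($A.getCallback(function() {
            if (cmp.isValid()) {
                var value=cmp.get('v.value')+increment;
                // clamp to the target on the final jump to avoid rounding drift
                if ((increment>=0 && value>=target) || (increment<0 && value<=target)) {
                    value=target;
                    window.clearInterval(cmp.get('v.timeoutRef'));
                    cmp.set('v.timeoutRef', null);
                }
                cmp.set('v.value', value);
            }
        }), 100);
        cmp.set('v.timeoutRef', timeoutRef);
    }
})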


Saturday, 21 October 2017

Programming against Apex Interfaces


Introduction

In the dim and distant past (getting on for 10 years ago now), when I was a Java programmer working in the systems integration and financial services space, the majority of the programming that I did was against interfaces rather than concrete class instances. This isn’t something that I see very often in the Salesforce world (at least on the SI side - I’d imagine ISVs go in for it a lot more) for a variety of reasons including:

  • Lots of Salesforce work is a point solution on an existing implementation, so it doesn’t always merit abstraction via interfaces.
  • Customers don’t want to pay the extra cost, and accept that additional development effort may be required in the future.
  • Many people have ended up as Salesforce developers through non-traditional routes and haven’t been exposed to interfaces.

Simply put, programming against interfaces means that rather than identifying the specific class that carries out an operation in your code, you identify an interface and rely on there being a class that implements that interface at run time. This allows your code to focus on what the operation needs to do rather than how it is accomplished.

Why?

Programming against interfaces introduces flexibility. You can change the implementation of the interface without affecting any of the code that uses the interface. Thus you could start out with a fake implementation that simply returns canned data, allowing you to develop the real interface implementation and the consuming code in parallel. It also means you can have multiple implementations of an interface inside a single system and swap them around via configuration.

Code me. Now.

The scenario for the example is calculating the discount due to an account. The customer has told us that at the moment it is a flat 10%, but this is an area that they will want to change in the future to take into account the account’s industry. This is a classic use case for an interface, as we know that we will need to swap out the implementation in the future, and if it changes once it is likely to change again.

Concrete classes

My initial implementation of the class to calculate the discount is as follows:

public class SimpleAccountDiscount {
	public double getDiscount(Id accountId) {
        return 10;
    }
}

and I can use this directly in code:

double discount=new SimpleAccountDiscount().getDiscount('00124000004N1TfAAK');
System.debug('Discount = ' + discount);

producing the following output:

07:02:45:045 USER_DEBUG [2]|DEBUG|Discount = 10.0

All well and good. Next I create the more complex version which takes the industry into account - yes, complex is probably over-egging it a bit, but all things are relative.

public class ComplexAccountDiscount {
    public double getDiscount(Id accountId) {
        // default value
        Double discount=10;
        Account acc=[select id, Industry
                     from Account
                     where id=:accountId];

        if (acc.Industry=='Apparel') {
            discount=15;
        }
        else if (acc.Industry=='Consulting') {
            discount=5;
        }
        
        return discount;
    }
}

And I can use this in code just as easily:

System.debug('Discount for Burlington (apparel) = '
             + new ComplexAccountDiscount().getDiscount('00124000004RIGFAA4'));

System.debug('Discount for Dickenson (consulting) = '
             + new ComplexAccountDiscount().getDiscount('00124000004RIGHAA4'));

produces the output:

17:14:25:078 USER_DEBUG [1]|DEBUG|Discount for Burlington (apparel) = 15.0
17:14:25:081 USER_DEBUG [4]|DEBUG|Discount for Dickenson (consulting) = 5.0

However, when the customer is ready to move to the more complex version, I need to carry out a deployment in order to start using my new class. Plus if it turned out that a downstream system wasn’t ready for the change, I’d have to carry out another deployment to revert to the simple version. 

Implementing an interface

The following Apex interface reflects the method that must be exposed by any discount implementation:

public interface AccountDiscountInterface
{
    double GetDiscount(Id accountId);
}

Note that I don’t specify an access modifier for the method - as an interface reflects the public interface, the methods in the interface are implicitly public. I then modify my classes (only the complex version shown):

public class ComplexAccountDiscount implements AccountDiscountInterface {

I can then use the interface in place of a concrete class:

AccountDiscountInterface adi=new ComplexAccountDiscount();
System.debug('Discount for Burlington (apparel) = '
             + adi.GetDiscount('00124000004RIGFAA4'));

So my code that gets the discount doesn’t care about the implementation, but the previous line that instantiates the concrete class does. A partial success at most.

Dynamically instantiating a class

The final piece of the puzzle is dynamically instantiating a class based on configuration. If I can do this, my customer can switch implementations simply by changing a custom setting. Dynamic instantiation consists of two parts. First Type.forName() is used to get the type of the Apex class. Then the newInstance() method of the resulting Type is executed to create an instance of the named class. I’ve placed the name of the implementing class into an instance of the Account_Discount_Setting__c custom setting named ‘Default’. This has a field called Implementing_Class__c that I’ve set to ‘ComplexAccountDiscount’:

Account_Discount_Setting__c setting=
    Account_Discount_Setting__c.getInstance('Default');

Type impl = Type.forName(setting.Implementing_Class__c);
AccountDiscountInterface adi=
    (AccountDiscountInterface) impl.newInstance();

System.debug('Discount for Burlington (apparel) = '
             + adi.GetDiscount('00124000004RIGFAA4'));

Now my code has no knowledge of the class that is implementing the discount calculation, it simply creates whatever class has been configured and uses that. If my customer wants to switch the implementation, it’s as simple as changing a field in a custom setting. It also allows me to set up a different version for testing - unit tests should be as simple as possible, so if I have a genuinely complex discount implementation I probably don’t want to use that when testing the consuming code, in case it has a side effect that my test isn’t expecting - I’d still test that implementation, but in its own unit tests.

So should these always be used?

As I mentioned earlier, you don’t always need an interface. They do add a little overhead, as I now have an additional custom setting and interface to create and deploy. If the implementation is never likely to change then there’s no point in abstracting it away like this. We use them a lot in BrightMedia as it allows us to have a selection of implementations for services that customers can choose between.

 

 

 

 

Saturday, 23 September 2017

Taking a Moment with a Lightning Component


Looking after yourself

A huge amount of effort nowadays goes in to making people more productive. New AI features on the Salesforce platform figure out the key leads and deals so that reps focus on the right tasks, more information is pushed to employees to help their decision making, and barely 10 minutes goes by without a new listicle being published on Medium promising to teach you life hacks so that you can spend more of your time carrying out that sweet, sweet work. It’s easy to forget that you are more productive if you look after yourself while working. Sitting down staring at a screen for hours on end can cause you real damage, but as it’s not an instant, acute pain more often than not you don’t realise it. This is where the systems that you work with should be helping you, rather than trying to squeeze more out of you.

TakeAMoment (Lightning Component)

When you create a Lightning application in Salesforce, you can add a utility bar. Lightning components in this utility bar have special powers - they persist across the various tabs and pages of the application, unaffected by your navigation. Using the “Load in background when the app opens” setting means that the component can know the point at which you started the application and can make some helpful suggestions after specific periods of time have elapsed.

The TakeAMoment lightning component starts a JavaScript interval timer upon initialisation that fires every minute. When the timer fires, the component figures out how long it’s been since the user started the application and, if a specific amount of time has elapsed, suggests that the user take a moment to do something to benefit themselves. For example, after 20 minutes the user is advised to blink 10 times to ensure they don’t dry their eyes out. Here’s a video of the component doing its thing - note that in time honoured fashion for anything tech related, the sequence has been shortened:

(The ‘Come with me if you want to live’ initial message is because I had The Terminator on in the background while I wrote the component - I think it grabs the user’s attention!)

Note that I’m using the variant of the toast message that allows me to include a clickable URL - each suggested activity contains a link to a page on the web explaining why it’s a good idea - at the end of the video I click through to the story in the New York Times that explains why getting up and walking around for 5 minutes every hour is a good thing.

You can view the component at the GitHub repo shown below - it’s pretty basic at the moment, the timings and messages are all hardcoded, but this has the advantage that there’s no interaction with the server - everything takes place on the client. If you want to use it, create or edit a lightning app (I just added the utility bar component to the standard Sales app) and add the TakeAMoment component to the utility bar. Make sure to check ‘Load in background when the app opens’, otherwise the user has to click the component in the utility bar to initialise it.
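To give a flavour of the pattern, here is a minimal sketch of the timer logic rather than the actual component code - the timing, message and helper name are purely illustrative:

({
    startTimer : function(cmp) {
        // check once a minute how long the app has been open
        var ticks=0;
        window.setInterval($A.getCallback(function() {
            ticks++;
            if (ticks===20) {
                // after 20 minutes, suggest the user looks after their eyes
                var toastEvent=$A.get('e.force:showToast');
                toastEvent.setParams({
                    title: 'Take a moment',
                    message: 'Blink 10 times so your eyes do not dry out',
                    type: 'info'
                });
                toastEvent.fire();
            }
        }), 60000);
    }
})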

Obviously as well as looking after my users, this component can also be used for nefarious purposes, typically by my evil co-worker - regular messages telling users they aren’t working hard enough, that winners work through lunch, family dinners are overrated etc. Sadly this is the nature of most if not all technology.


Sunday, 20 August 2017

Lightning Testing Service Part 2 - Custom Jasmine Reporter


(Note: This blog applies to the Jasmine variant of the Lightning Testing Service)

Introduction

One of the cool things about Jasmine is how easy it is to add your own reporter. Compared to some of the other JavaScript testing frameworks I’ve used in the past, it’s entirely straightforward. Essentially you are implementing an interface, although as JavaScript doesn’t have interfaces it’s very much based on what you should rather than must implement. A Jasmine Reporter is a JavaScript object with the appropriate functions for the framework to call when something interesting happens. Even cooler is the fact that the framework first checks that you have provided the function before it is invoked, so if you don’t care about specific events, you just leave out the functions to handle those events and you are all good.

Functions

Some or all of the following functions are required to handle the various events that occur as tests are executed - basically things commencing and completing:

  • jasmineStarted/jasmineDone - called before any specs (tests) execute/once all tests have completed
  • suiteStarted/suiteDone - called before a specific suite (group of tests) execute/once they have completed
  • specStarted/specDone - called before a specific test executes/once it has completed

Once you have your object with the desired functions, it must be registered before any tests are queued:

jasmine.getEnv().addReporter(your_reporter);

and that’s all there is to it.

Example

Below is an example lightning component that creates a simple reporter to capture the success/failure of each suite/spec and log information to the console. Note that this relies on the Jasmine artefacts installed by the latest version of the Lightning Testing Service unmanaged package. The component is named KABJasmineReporter:

Component

<aura:component extensible="true">
    <ltng:require scripts="{!join(',',
				$Resource.lts_jasmine + '/lib/jasmine-2.6.1/jasmine.js',
				$Resource.lts_jasmine + '/lib/jasmine-2.6.1/jasmine-html.js',
				$Resource.lts_jasmineboot
				)}"
                  afterScriptsLoaded="{!c.doInit}" />
</aura:component>

Controller

({
	doInit : function(component, event, helper) {
		helper.initialiseJasmineReporter(component, event);
	}
})

Helper

({
    myReporter : {
        content : '',
        suites : [],
        totalSuccesses:0,
        totalFailures:0,
        totalTests:0,
        output : function(message) {
            console.log(message);
            this.content+=message;
        },
        clear: function() {
            this.content='';
            this.suites=[];
            this.totalSuccesses=0;
            this.totalFailures=0;
            this.totalTests=0;
        },
        getCurrentSuite: function() {
            return this.suites[this.suites.length-1];
        },
        getCurrentSpec : function() {
            return this.getCurrentSuite().specs[this.getCurrentSuite().specs.length - 1];
        },
        jasmineStarted: function(suiteInfo) {
            this.output('Running suite with ' + suiteInfo.totalSpecsDefined + ' specs');
        },
        suiteStarted: function(result) {
            this.output('Suite started: ' + result.description + ' whose full description is: ' + result.fullName);
            this.suites.push({name : result.fullName,
                              specs : []});
        },
        specStarted: function(result) {
            this.output('Spec started: ' + result.description + ' whose full description is: ' + result.fullName);
            this.getCurrentSuite().specs.push({name: result.description,
                                               failures: [],
                                               failureCount: 0,
                                               successes: 0});
        },
        specDone: function(result) {
            this.output('Spec: ' + result.description + ' complete status was ' + result.status);
            this.output(result.failedExpectations.length + ' failures');
            for(var i = 0; i < result.failedExpectations.length; i++) {
                var failure=result.failedExpectations[i];
                this.output('Failure: ' + failure.message);
                this.output(failure.stack);
                this.getCurrentSpec().failures.push({message: failure.message,
                                                     stack : failure.stack});
                this.getCurrentSpec().failureCount++;
                this.totalFailures++;
            }
            this.output(result.passedExpectations.length + ' successes');
            this.getCurrentSpec().successes+=result.passedExpectations.length;
            this.totalSuccesses+=result.passedExpectations.length;
        },
        suiteDone: function(result) {
            this.output('Suite: ' + result.description + ' was ' + result.status);
            for(var i = 0; i < result.failedExpectations.length; i++) {
                this.output('AfterAll ' + result.failedExpectations[i].message);
                this.output(result.failedExpectations[i].stack);
            }
        },
        jasmineDone: function() {
            this.totalTests=this.totalSuccesses+this.totalFailures;
	        this.output('Finished tests');
    	    this.output('Successes : ' + this.totalSuccesses);
	        this.output('Failures : ' + this.totalFailures);
	        this.output('Details : ' + JSON.stringify(this.suites, null, 4));
        }
    },
    initialiseJasmineReporter : function(component, event) {
        console.log('Initialising jasmine reporter');
        var self=this;
        this.myReporter.clear();
        var env = jasmine.getEnv();
        jasmine.getEnv().addReporter(this.myReporter);
    }
})

A couple of tweaks to the jasmineTests app to include my reporter (and to limit to a couple of tests, otherwise there’s a lot of information in the console log):

App

<aura:application >
    <c:KAB_JasmineReporter />
    <c:lts_jasmineRunner testFiles="{!join(',',
    	$Resource.jasmineHelloWorldTests
    )}" />
</aura:application>

Executing the app produces the following console output:

Initialising jasmine reporter
Running suite with 2 specs
Suite started: A simple passing test whose full description is: A simple passing test
Spec started: verifies that true is always true whose full description is: A simple passing test verifies that true is always true
Spec: verifies that true is always true complete status was passed
0 failures
1 successes
Suite: A simple passing test was finished
Suite started: A simple failing test whose full description is: A simple failing test
Spec started: fails when false does not equal true whose full description is: A simple failing test fails when false does not equal true
Spec: fails when false does not equal true complete status was pending
0 failures
0 successes
Suite: A simple failing test was finished
Finished tests
Successes : 1
Failures : 0
Details : [
    {
        "name": "A simple passing test",
        "specs": [
            {
                "name": "verifies that true is always true",
                "failures": [],
                "failureCount": 0,
                "successes": 1
            }
        ]
    },
    {
        "name": "A simple failing test",
        "specs": [
            {
                "name": "fails when false does not equal true",
                "failures": [],
                "failureCount": 0,
                "successes": 0
            }
        ]
    }
]

Conclusion

While this has been a simple example, there’s a lot more that can be done with custom reporters, such as posting notifications with the test results, which I plan to explore in later posts.


Saturday, 5 August 2017

Lightning experience utility bar - add an app for that


Introduction

This week I’ve been working on adding utility bar functionality to our BrightMedia appcelerator. Typically when I build functionality of this nature I’ll start off with the component markup using hardcoded values to get the basic styling right, then make it dynamic with the data coming from the JavaScript controller/helper, before finally wiring it up to an Apex controller that extracts data from the Salesforce database, either through sobjects or custom settings.

For the purposes of this blog I’m going to say that it was presenting a list of Trailmixes, the new feature in Trailhead (it wasn’t, but this is a much simpler example and taps into the zeitgeist).

First incarnation

The first version of the component simply displayed a Trailmix with a button to open it:

<aura:component implements="flexipage:availableForAllPageTypes"
                access="global">
    
    <div class="slds-p-around--x-small slds-border--bottom slds-theme--shade">
        <div class="slds-grid slds-grid--align-spread slds-grid--vertical-align-center">
            <div>
                Blog Trailmix
            </div>
            <div>
            </div>
            <div>
                <lightning:buttonIcon iconName="utility:open"
                                      title="Open"
                                      alternativeText="Open" variant="border-filled"/>
            </div>
        </div>
    </div>
</aura:component>

and, not surprisingly, this worked fine:

[Screenshot: the hardcoded Trailmix displayed in the utility bar]

Second incarnation

The second version initialised a list of Trailmixes, still containing a single element, in the JavaScript controller, which the component then iterated. First the component:

<aura:component implements="flexipage:availableForAllPageTypes"
                access="global">
    
    <aura:attribute name="mixes" type="Object[]" />
    
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
    
    <aura:iteration items="{!v.mixes}" var="mix">
        <div class="slds-p-around--x-small slds-border--bottom slds-theme--shade">
            <div class="slds-grid slds-grid--align-spread slds-grid--vertical-align-center">
                <div>
                    {!mix.name}
                </div>
                <div>
                </div>
                <div data-record="{!mix.key}">
                    <lightning:buttonIcon onclick="{!c.OpenMix}"
                                          iconName="utility:open"
                                          title="Open"
                                          alternativeText="Open" variant="border-filled"/>
                </div>
            </div>
        </div>
    </aura:iteration>
</aura:component>

Next, the controller

({
	doInit : function(component, event, helper) {
        var mixes[];
        mixes.push({key:"BLOG",
                    name:"Blog Trailmix",
                    link:"https://trailhead.salesforce.com/users/0055000000617DXAAY/trailmixes/basics"});
		component.set('v.mixes', mixes);
    }
})

Here things started to go awry - clicking the utility bar item to open it did nothing and a few seconds later a toast message would appear, with a message along the lines of “we’re still working on your request”, but nothing further. Changing the component in the utility bar configuration to initialise in the background got a little further, but still no content, instead a perpetual spinner:

[Screenshot: the utility bar item showing a perpetual spinner]

 

Viewing the JavaScript console showed nothing out of the ordinary. I’d been having a few problems with my internet connection so I assumed it was either that or Salesforce having an issue, and as it was fairly late at night I decided to leave it until the morning to see if things were resolved. No such luck.

Wrap it and app it

I then did what I usually do when I’m having issues with a lightning component - create an app with just the component in and see what happens then. The app couldn’t be simpler:

<aura:application >
    <c:Trailmixes />
</aura:application>

Previewing this showed that there was an error that was somehow being swallowed:

[Screenshot: the error displayed by the standalone app]

which I’m sure the eagle-eyed reader has spotted - the declaration of my mixes variable that eventually gets stored as an attribute was missing an '=' character:

var mixes[];

After correcting this and refreshing the page a couple of times, I was back on track with my Trailmix component.
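For completeness, the corrected controller - the only change is the '=' in the declaration:

({
    doInit : function(component, event, helper) {
        // '=' was missing here in the broken version
        var mixes=[];
        mixes.push({key:"BLOG",
                    name:"Blog Trailmix",
                    link:"https://trailhead.salesforce.com/users/0055000000617DXAAY/trailmixes/basics"});
        component.set('v.mixes', mixes);
    }
})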

In conclusion

Always try any troublesome component in an app of its own - while in most cases you won’t have the utility bar swallowing errors, it’s way easier to debug a component when there aren’t 50 others on the same page firing events and changing attributes. Also, sometimes a syntax type error was shown in the JavaScript console and sometimes not, so look there first.


Friday, 21 July 2017

Not Hotdog - Salesforce Einstein Edition


Introduction

Anyone who is a fan of HBO’s Silicon Valley show will be familiar with Not Hotdog, Jian Yang’s app that determines whether an item of food is a hotdog or not. In a wonderful example of fiction made fact, the show has released iOS and Android applications in real life - you can read about how they did this in their Medium post. Around this time I was working through the Build a Cat Rescue App that Recognises Cat Breeds Trailhead project, which uses Einstein Vision to determine the breed of cat from an image, and it struck me that I could use this technology to develop a Salesforce version of Not Hotdog.

Building blocks

Trailhead Playground

As I’d already set up Einstein Vision and connected it to my Trailhead Playground, I decided to build on top of that rather than create a new developer edition. 

Einstein Vision Apex Wrappers

A key aspect of the project is the salesforce-einstein-vision-apex repository - Apex wrappers for Einstein Vision produced by Developer Evangelist René Winkelmeyer. The project somewhat glosses over these, but they provide a really nice mechanism to create and train an Einstein Vision dataset and then use that for predictions. It takes away pretty much all the heavy lifting, so thanks René.

Public Access Community

Let’s be honest, there was no way I was going to build a full-fledged app for this. I did consider building an unmanaged package and including the images I used to train the dataset, but it seemed a bit crazy to have everyone creating and training their own dataset for the same purpose. Given my reach in the Salesforce community this could literally result in tens of duplicate datasets :)

I therefore decided to expose this as an unauthenticated page on a Salesforce community. I had the option of using a Site but I also wanted to play around with unauthenticated access to Lightning Components and the docs say to use a community. 

Putting it all together

I had to make one change to the Einstein Vision Apex Wrappers - I couldn’t get the guest user to be able to access the Salesforce File containing the Einstein Vision key, so I just hardcoded it into the EinsteinVision_PredictionService class. Evil I know, but this is hardly going into production any time soon.

I then created a dataset named ‘nothotdog’ and trained it via a zip file of images. The zip file is organised into a directory per label - in my case there were two directories - ‘Hot Dog’ and ‘Not Hot Dog’.

I then added the following method to the EinsteinVision_Admin class, to match a supplied image in base64 form against the dataset.

@AuraEnabled
public static String GetHotDogPredictionKAB(String base64) {
    String hdLabel='Unable to match hotdog';
    Blob fileBlob = EncodingUtil.base64Decode(base64);
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    for (EinsteinVision_Dataset dataset : datasets) {
        if (dataset.Name.equals('nothotdog')) {
            EinsteinVision_Model[] models = service.getModels(dataset);
            EinsteinVision_Model model = models.get(0);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            // use the label of the highest probability match as the result
            hdLabel = probability.label;
        }
    }
        
    return hdLabel;
}

Next I needed a lightning component that would allow me to upload a file and send it back to the server, to execute the method from above. However, I also wanted this to work from a mobile device as file inputs on the latest Android and iOS allow you to take a picture and use that. The problem with this is that the image files are pretty huge, so I also needed a way to scale them down before submitting them. Luckily this can be achieved by drawing the image to an HTML5 canvas element scaled to the appropriate size.
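The scaling approach looks something like the following sketch - a minimal illustration with made-up names, not the code from the bundle (that is in the gist linked below):

    // sketch: scale an image file down via a canvas and return it as base64
    scaleAndEncode : function(file, maxSize, callback) {
        var img=new Image();
        img.onload=function() {
            var scale=Math.min(1, maxSize/Math.max(img.width, img.height));
            var canvas=document.createElement('canvas');
            canvas.width=img.width*scale;
            canvas.height=img.height*scale;
            canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
            // strip the data URL prefix to leave just the base64 content
            callback(canvas.toDataURL('image/jpeg').split(',')[1]);
        };
        img.src=URL.createObjectURL(file);
    }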

Unfortunately this threw up another problem, in that when the Locker Service is enabled you don’t have an image element that can be drawn on a canvas, you have a secure element instead. There is no workaround to this so I had to drop the API version of my component down to 39. I guess one day the Locker Service will be finished and everything will work fine.

There’s a fair bit of code in the NotHotdog Lightning Component bundle so rather than making this the world’s longest post you can view it at this gist.

Next, I needed an app to surface the bundle through a Visualforce page. These are pretty simple, the only change to the usual way this is done is to implement the interface ltng:allowGuestAccess:

<aura:application access="GLOBAL" extends="ltng:outApp"
    implements="ltng:allowGuestAccess">

    <aura:dependency resource="c:NotHotDog"/>
</aura:application>

Finally, the Visualforce page that is accessible via the community:

<apex:page docType="html-5.0" sidebar="false" showHeader="false" standardStylesheets="false"
           cache="false" applyHtmlTag="false">
    <html>
        <head>
            <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no;" />
        </head>
        <body>
            <apex:includeLightning />
            <script>
            $Lightning.use("c:NotHotDogApp",
                           function() {
                               $Lightning.createComponent(
                                   "c:NotHotDog",
                                   { },
                                   "lightning",
                                   function(cmp) {
                                   }
                               );
                           }
                          );
            </script>
            <div id="lightning" />
        </body>
    </html>
</apex:page>

Yes we’ve got a video

Here’s a video of the app doing its thing - first recognising a hotdog and then correctly determining that the BrightGen head office building is not a hotdog. What a time to be alive.

 

 

It’s not bullet proof

The HBO team trained their app with hundreds of thousands of images; I just did a couple of hundred because this isn’t my day job! It’s pretty good on obvious hotdog images, but not so much when you take photos. Your mileage may vary. Also, take photos on a phone in landscape mode as most of them rotate it.

Try it out yourself

If you’d like to enjoy the majesty of this application on your own machine:

[QR code]

Meetup

If you’re in London on Aug 2nd 2017, we’ll have a talk on Einstein Vision at our developer meetup. Sign up at:

    https://www.meetup.com/LondonSalesforceDevelopers/events/237321315/


Saturday, 1 July 2017

Lightning Testing Service Part 1


Introduction

Back at Dreamforce 16 I gave a talk on Unit Testing Lightning Components using Jasmine. During that talk I said that I hoped that Salesforce would come up with their own testing framework for Lightning Components. I wasn’t disappointed as the Lightning Testing Service (LTS) went into Pilot at the end of May and I was lucky enough to be invited in. It’s been a slight challenge to find enough time to try out LTS while still taking SFDX through its paces and making sure I give full attention to my day job, but it’s worth the effort.

The LTS

The LTS is available on GitHub for anyone to try out - I’m not sure how much support you’ll get if you aren’t on the pilot, but I’ve found it works as-is. Now that SFDX is in open beta you can easily combine the two - I’ve just done exactly that and it took around 30 minutes including signing up for trial orgs, downloading the latest SFDX CLI etc.

The Lightning Testing Service is agnostic about the JavaScript testing framework that you use, but all the samples are based on Jasmine. Having used a few of them I think this is a good idea as Jasmine has a great set of features and most importantly an equivalent for most of the features of the Apex testing framework. The one area that Jasmine is lacking, I think, is the documentation. There are plenty of examples but not that much in the way of explanation as to how the examples actually work. While you can dig into the code as it’s all open source (https://jasmine.github.io/), if you are reasonably new to JavaScript and/or front end unit testing it’s a struggle. I found Jasmine JavaScript Testing by Paulo Ragonha to be an excellent introduction. While the latter chapters of the book focus on React, the first 6 chapters cover generic testing with Jasmine and explain the concepts and features really well (I have no affiliation with the book or author).

Apex eye view

Jasmine concepts map to Apex test concepts as follows: 

Apex         Jasmine         Syntax
Suite        Suite           describe('initialise', function () {...})
Test Method  Spec            it('gets the accounts', function () {...})
Assert       Expectation     expect(component.get("v.status")).toBe("true");
Setup        beforeEach/All  beforeEach(function(){...});

Jasmine also has a couple of concepts that Apex doesn’t have:

  • afterEach/All - teardown code to be executed after each spec (afterEach) or the last spec (afterAll). You don’t have this in Apex as test transactions are rolled back, so there is nothing to tear down.
  • Spies - these allow you to stub out a function and track the number of times it has been called and the parameters it was called with. They are really useful when you don’t have transactions that automatically roll back, as you need to make sure you stub out anything that might commit to the Salesforce database (see the sketch below).
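For example, a spy can stand in for a helper function that would otherwise call the server - a quick sketch (the object and function names are made up):

describe('account list', function () {
    it('requests accounts via the helper', function () {
        // a stand-in for a helper that would normally call the server
        var helper={ loadAccounts: function () {} };
        spyOn(helper, 'loadAccounts').and.returnValue([{Name: 'Canned Account'}]);

        var accounts=helper.loadAccounts('someFilter');

        expect(helper.loadAccounts).toHaveBeenCalledWith('someFilter');
        expect(accounts.length).toBe(1);
    });
});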

Running Tests

One of the challenges when unit testing on the front end is figuring out how to execute the tests. The route I went was to make it the responsibility of a component to notify a controlling component that it had unit tests and to schedule those tests. There were a couple of downsides to this:

  1. The test code was tightly coupled with the production code so would be deployed to production
  2. The controlling component had to know how many components had tests so that it could wait until the appropriate number had notified it that their tests were queued.

When I presented this I made the point that there are a number of ways of doing this, and the LTS takes a somewhat different approach.

There still has to be a component that is responsible for loading Jasmine, setting up the reporter(s) and managing the tests, and the LTS examples have one of these. This component also schedules the tests by loading one or more static resources that contain a collection of Jasmine test suites. As these resources are loaded by the <ltng:require /> tag, the JavaScript code is automatically executed by the browser and schedules the tests with the Jasmine runner.

This approach has the upside of decoupling the test code from the actual component, allowing you full control over whether you want to deploy them to production, and removing the requirement for the component executing the tests to know anything about how many tests are being executed. It also allows you to easily group tests into functional areas.

The downside is that it decouples the test code from the actual component, which means that if you want to stub out a method it has to be exposed as part of the component’s API via an <aura:method /> attribute. I’m not mad keen on this as it feels like I’m exposing the internals for pure testing purposes and I can’t stop my Evil Co-Worker from creating components that use these methods for nefarious purposes. That said, I’m pretty sure it would be possible to leave tests that rely on access to a component’s internals inside the component itself by dynamically creating the component once the Jasmine framework is all set up. This is something I hope to cover in a later blog post assuming I can get it working!

SFDX Integration

This is probably the coolest aspect of the LTS. The SFDX CLI with Force plugin 40 includes a new command to execute Lightning Component unit tests:

sfdx force:lightning:test:run

This creates a browser session and navigates to a lightning application (default Tests.app), which executes the tests. The CLI is then able to get at the results of the tests and output them. I’m not sure how this last piece works, but it feels like something you’d need to find a way to replicate if using another JavaScript testing framework.  What it means, however, is that you can include these unit tests in a continuous integration process, thus ensuring that your components work as expected.

That’s it for part 1 - there’s a lot of information at the github repo and there’s no point in me replicating that just to get some eyes on my blog.


Saturday, 10 June 2017

Locker Service, Lightning Components and JavaScript Libraries


Introduction

As I’ve previously blogged, the Summer 17 release of Salesforce allows you to turn the locker service on or off based on the API version of a component. This is clearly an awesome feature, but there is a gotcha which I came across this week while working with the Lightning Testing Service Pilot.

I have a JavaScript library containing functionality that needs to work as a singleton (so a single instance that all Lightning Components have access to). This library is a static resource that is loaded via the <ltng:require /> standard component.

One window to rule them all?

In JavaScript world there is a single Window object, representing the browser’s window, which all global objects, functions and variables are attached to. When my JavaScript library is loaded, an immediately invoked function expression executes and attaches my library object to the Window.
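The library follows the usual pattern - something along these lines (a simplified sketch rather than the real library):

// sketch: an IIFE attaching a singleton library object to the window
(function (window) {
    if (!window.MyLibrary) {
        window.MyLibrary={
            debugEnabled: false,
            log: function (message) {
                if (this.debugEnabled) {
                    console.log(message);
                }
            }
        };
    }
}(window));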

In Lightning Components world, with the locker service enabled, the single Window object changes somewhat. Instead of a Window, your components see a SecureWindow object which is shared among all components in the same namespace. This SecureWindow is isolated from the real Window for security reasons. In practice, this means that if you mix locker and non-locker Lightning Components on the same page, there are two different window concepts which know nothing about each other.

Example code 

The example here is a lightning application that attaches a String to the Window object when it initialises and includes two components that each attempt to access this variable from the window, one at API 40 and one at API 39. The app also attempts to access the variable, just to show that it is correctly attached.

App

<aura:application >
    <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
    <button onclick="{!c.showFromWindow}">Show from in App</button> <br/>
    <c:NonLockerWindow /> <br/>
    <c:LockerWindow /> <br/>
</aura:application>

Controller

({
	doInit : function(component, event, helper) {
		window.testValue="From the Window";
	},
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Locker Component

<aura:component >
	<button onclick="{!c.showFromWindow}">Show from Locker Component</button>
</aura:component>

Controller

({
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Non Locker Component

<aura:component >
	<button onclick="{!c.showFromWindow}">Show from non-Locker Window</button>
</aura:component>

Controller

({
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Executing the example

If I set the API version of the app to 40, this enables the locker service. Clicking the three buttons in turn shows the following alerts:

From App

[Screenshot: alert from the app - the variable is defined]

As expected, the variable is defined.

Non-Locker Component

[Screenshot: alert from the non-locker component - the variable is undefined]

As the application is running with the locker service enabled, the variable is attached to a secure window. The non-locker service component cannot access this so the variable is undefined.

Locker Component


[Screenshot: alert from the locker component - the variable is defined]

As this component is also executing with the locker service enabled, it has access to the secure window for its namespace. As the namespace between the app and this component is the same, the variable is available.

Changing the app to API version 39 allows the non-locker component to access the variable from the regular JavaScript window, while the locker component doesn’t have access as the variable is not defined on the secure window.

So what?

This has a couple of effects on my code if I mix API versions so that my page contains a combination of locker and non-locker components:

  •  I can’t rely on the library being made available by the containing component or app. Thus I have to ensure that every component loads the static resource. This is best practice anyway, so not that big a deal
  • I don’t have a singleton library any more. While this might not sound like a big deal, given that I can load it into whatever window variant I have, it means that if I change something in that library from one of my components, it only affects the version attached to the window variant that my component currently has access to. For example, if I set a flag to indicate debug mode is enabled, only those components with access to the specific window variant will pick this up. I suspect I’ll solve this by having two headless components that manage the singletons, one with API 40 and one with API < 40, and send an event to each of these to carry out the same action.


Sunday, 4 June 2017

Visualforce Page Metrics in Summer 17


Introduction

The Summer 17 release of Salesforce introduces the concept of Visualforce Page Metrics via the SOAP API. This new feature allows you to analyse how many page views your Visualforce pages received on a particular day. This strikes me as really useful functionality - I create a lot of Visualforce pages (although more Lightning Component based these days), to allow users to manage multiple objects on a single page for example. After I’ve gone to the effort of building the page I’m always curious as to whether anyone is actually using it!

SOAP Only

A slight downside to this feature is that the information is only available via the SOAP API. The release notes give an example of using the Salesforce Workbench, but ideally I’d like a Visualforce page to display this information without leaving my Salesforce org. Luckily, as I’ve observed in previous blog posts, the Ajax Toolkit provides a JavaScript wrapper around the SOAP API that can be accessed from Visualforce. 
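Getting the toolkit going is mostly a matter of initialising the connection with the current session before running any queries - in JavaScript terms it is roughly this (a sketch that assumes the toolkit script has already been included and that the code runs in a Visualforce page, so the merge field is evaluated server-side):

// sketch: initialise the Ajax Toolkit connection with the current session
// '{!$Api.Session_ID}' is a Visualforce merge field, evaluated before the page is served
sforce.connection.sessionId='{!$Api.Session_ID}';

// quick sanity check that the connection works
var userInfo=sforce.connection.getUserInfo();
console.log('Running as ' + userInfo.userName);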

Sample Page

In my example page I’m grouping the information by date and listing the pages that were accessed in order of popularity. There’s not much information in the page as yet because I’m executing this from a sandbox, so the page may get unwieldy in a production environment and need some pagination or filter criteria.

[Screenshot: page view metrics grouped by date]

Show me the code

Once the Ajax Toolkit is set up, the following query is executed to retrieve all metrics:

var result = sforce.connection.query(
   "SELECT ApexPageId,DailyPageViewCount,Id,MetricsDate FROM VisualforceAccessMetrics " +
   "ORDER BY MetricsDate desc, DailyPageViewCount desc");

The results of the query can then be turned into an iterator and the records extracted - I’m storing these as an array in an object with a property per date:

// map-like object keyed by metrics date
var metricByDate={};

var it = new sforce.QueryResultIterator(result);
        
while(it.hasNext()) {
    var record = it.next();
            
    var dEle=metricByDate[record.MetricsDate];
    if (!dEle) {
        dEle=[];
        metricByDate[record.MetricsDate]=dEle;
    }
            
    // add to the metrics organised by date
    dEle.push(record);
}
        

 This allows me to display the metrics by Visualforce page id, but that isn’t overly useful, so I query the Visualforce pages from the system and store them in an object with a property per id - analogous to an Apex map:

result = sforce.connection.query(
    "Select Id, Name from ApexPage order by Name desc");
        
it = new sforce.QueryResultIterator(result);
        
var pageNamesById={};
        
while(it.hasNext()) {
    var record = it.next();
    pageNamesById[record.Id]=record.Name;
}

 I can then iterate the date properties and output details of the Visualforce page metrics for those dates:

// the table rows are accumulated into a markup string
var output='';

for (var dt in metricByDate) {
    if (metricByDate.hasOwnProperty(dt)) {
        var recs=metricByDate[dt];
        output+='<tr><th colspan="3" style="text-align:center; font-size:1.2em;">' + dt + '</th></tr>';
        output+='<tr><th>Page</th><th>Date</th><th>Views</th></tr>';
        for (var idx=0, len=recs.length; idx<len; idx++) {
            var rec=recs[idx];
            var name=pageNamesById[rec.ApexPageId];
            output+='<tr><td>' + name + '</td>';
            output+='<td>' + rec.MetricsDate + '</td>';
            output+='<td>' + rec.DailyPageViewCount + '</td></tr>';
        }
    }
}
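The accumulated rows then just need dropping into the page - something along these lines, where the metricsTable div is an illustrative placeholder rather than an element from the actual page:

// wrap the generated rows in a table and inject them into a placeholder div
document.getElementById('metricsTable').innerHTML =
    '<table class="list">' + output + '</table>';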

You can see the full code at this gist.

Related Posts

Saturday, 6 May 2017

Selenium and SalesforceDX Scratch Orgs

Screen Shot 2017 05 06 at 13 57 48

Introduction

Like a lot of other Salesforce developers I use Selenium from time to time to automatically test my Visualforce pages and Lightning Components. Now that I’m on the SalesforceDX pilot, I need to be able to use Selenium with scratch orgs. This presents a slight challenge, in that Selenium needs to open the browser and log in to the scratch org itself, rather than relying on the sfdx CLI. Wade Wegner’s post on using scratch orgs with the Salesforce Workbench detailed how to set a scratch org password, so I started down this route before realising that there’s a simpler way, based on the sfdx force:org:open command. Executing this produces the output:

Access org <org_id> as user <username> with the following URL: https://energy-agility-9184-dev-ed.cs70.my.salesforce.com/secur/frontdoor.jsp?sid=<really long sid>

so I can use the same mechanism once I have the URL and sid for my scratch org, which, as Wade’s post pointed out, I can get by executing sfdx force:org:describe. Even better, I can get this information in JSON format, which means I can easily process it in a Node script. Selenium also has a Node web driver so the whole thing comes together nicely.
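In other words, once the JSON output of force:org:describe has been parsed into an object (orgDetail below), building the login URL is a one-liner - a sketch:

// combine the instance URL and access token into a frontdoor login URL
var loginUrl = orgDetail.instanceUrl + '/secur/frontdoor.jsp?sid=' + orgDetail.accessToken;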

In the rest of this post I’ll show how to create a Node script that gets the org details programmatically, opens a Chrome browser, opens a page that executes some JavaScript tests and figures out whether the tests succeeded or not. The instructions are for MacOS as that is my platform of choice.

Setting Up

In order to control the Chrome browser from Selenium you need to download the Chrome WebDriver and add it to your system PATH - I have a generic tools directory that is already on my path so I downloaded it there.

Next, clone the github repository by executing:

 git clone https://github.com/keirbowden/dxselenium.git

The Salesforce application is based on my Unit Testing Lightning Components with Jasmine talk from Dreamforce 16. You probably want to update the config/workspace-scratch-def.json file to change the company details etc. to your own information.

Setting up the Scratch Org

Change to the cloned repo directory:

cd dxselenium

Then login to your dev hub:

sfdx force:auth:web:login --setdefaultdevhubusername --setalias my-hub-org

and create a scratch org - to make life easier I set the --setdefaultusername parameter so I don’t have to specify the details on future commands.

sfdx force:org:create --definitionfile config/workspace-scratch-def.json --setalias LCUT --setdefaultusername

Finally for this section, push the source:

sfdx force:source:push

Setting up Node

(Note that I’m assuming here that you have node installed).

Change to the node client directory:

cd node

Get the dependencies:

npm install

Executing the Test

Everything is now good to go, so execute the Node script that carries out the unit tests:

node ltug.js

You should see a Chrome browser starting and the Node script producing the following output:

Getting org details
Logging in
Opening the unit test page
Running tests
Checking results
Status = Success

The script exits after 10 seconds to give you a chance to look at the page contents if you are so inclined. 

The Chrome browser output can be viewed on YouTube.

Show me the Node

The Node script is shown below:

var child_process=require('child_process');

var webdriver = require('selenium-webdriver'),
    By = webdriver.By,
    until = webdriver.until;

var driver = new webdriver.Builder()
    .forBrowser('chrome')
    .build();

var exitStatus=1;

console.log('Getting org details');
var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));
var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;
console.log('Logging in');
driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));
driver.navigate().to(instance + '/c/JobsTestApp.app');
driver.sleep(2000).then(_ => console.log('Running tests'));
driver.findElement(By.id('slds-btn')).click();
driver.sleep(2000).then(_ => console.log('Checking results'));

driver.findElement(By.id("status")).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});
driver.sleep(10000);
driver.quit().then(_ => process.exit(exitStatus));

After the various dependencies are set up, the org details are retrieved via the sfdx force:org:describe command:

var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));

From the deserialised orgDetail object, the instance URL and access token are extracted:

var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;

And then the testing can begin. Note that the Selenium web driver is promise based, but also provides a promise manager which handles the sequencing automatically when using the Selenium API. In the snippet below, the driver.sleep won’t execute until the promise returned by the driver.get function has resolved.

driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));

However, when using non-Selenium functions, such as logging some information to the console, the promise manager isn’t involved so I need to manage this myself by supplying a function to be executed when the promise succeeds, via the then() method.

Note that I’ve added a number of sleeps as I’m testing this on my home internet which is pretty slow over the weekends.

The script then opens my test page and clicks the button to run the tests:

driver.navigate().to(instance + '/c/JobsTestApp.app');
driver.sleep(2000).then(_ => console.log('Running tests'));
driver.findElement(By.id('slds-btn')).click();

Finally, it locates the element with the id of status and checks that its inner text is ‘Success’ - note that again I have to manage the promise result as I’m outside the Selenium API.

driver.findElement(By.id("status")).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});

Related Posts

Saturday, 29 April 2017

Locker Service in Summer 17

Introduction

The Summer 17 release of Salesforce sees the activation of the Lightning Components Locker Service critical update - something that I’d say has been anticipated and feared in equal measure since it was announced. If you’ve been hiding under a rock for the last couple of years, the Locker Service (among other things) adds a security layer to your Lightning Components JavaScript, isolating components by namespace to ensure that your Evil Co-worker can’t write components that go tinkering with the standard Salesforce components for nefarious purposes.

The Breaking Changes Problem

The problem with enforcing the Locker Service is that it breaks code that was written before the Locker Service was known about. In many cases this was work that a customer paid a third party to carry out, and that third party has long since departed. Breaking that functionality through a change to the platform can be contentious, with third parties expecting to be paid to fix problems and customers expecting them to be fixed for nothing as key functionality no longer works. Now there were warnings in the docs from the get-go, basically saying this works now but might not work in the future, and I have no sympathy for anyone that wrote code that flew in the face of this warning. However, there are other considerations - some third party libraries break, for example, and that really isn’t something that could be defended against back in the day. Changes to the platform that break existing code that was written with best endeavours just aren’t cool.

The Breaking Changes Solution

The Summer 17 release notes preview contains an entry that will be music to the ears of any customer or consultant in this position - the Locker Service will be enforced based on API version. Anything on Summer 17 or later (API 40) will be subject to the Locker Service, while anything earlier (API 39 or lower) will not. You can think of this a bit like the ‘without sharing’ keyword - apply that to an Apex class and it bypasses sharing settings, and apply API 39 to any Lightning Component and it will bypass the Locker Service. From the horse’s mouth (the release notes preview):

When a component is set to at least API version 40.0, which is the version for Summer ’17, LockerService is enabled. LockerService is disabled for any component created before Summer ’17 because these components have an API version less than 40.0. To disable LockerService for a component, set its API version to 39.0 or lower.
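In source terms that just means the apiVersion in the component bundle’s metadata file - a sketch of a bundle pinned below the Locker Service threshold (the description is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<AuraDefinitionBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>39.0</apiVersion>
    <description>Legacy component - pinned to API 39.0 so the Locker Service is not enforced</description>
</AuraDefinitionBundle>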

I think this solution is pretty cool - it allows existing code to continue working while enforcing appropriate security on new code - whoever at Salesforce managed to persuade the security team to go this route, kudos to you!

Note that this is from the preview release notes so the situation could change, although let’s hope it doesn’t!

Use These Powers for Good

This new functionality shouldn’t be taken as an invitation to allow your Lightning Components to blaze a trail of destruction on every page that is unfortunate enough to include them. It should only be used as a last resort going forward, if for no other reason than it ties your component to an ageing API version, so you’ll miss out on all the cool stuff that comes in the future.

Related Posts

Saturday, 22 April 2017

Salesforce Health Check Custom Baseline

Introduction

The Salesforce Health Check has been around for a year or so now, debuting in the Spring 16 release of Salesforce (and bearing a striking resemblance to an AppExchange listing with the same name). The Salesforce Help topic gives chapter and verse on this, so I’m not going to spend any time on the basic functionality, except to say that it’s a great tool for allowing you to see at a glance how your Salesforce org shapes up security-wise. There has been one caveat though: the baseline it is compared against is set by Salesforce, not you, which means that if your security standard differs from the one true path you’ll see warnings and errors. As anyone who has accepted a unit test failure for more than one build knows, as soon as people expect errors they stop counting how many there are. Thus you may start out accepting a single warning, but before you know it you have a number of potential security problems which are being ignored because “that page always shows errors”.

Custom Baselines

Spring 17 introduced the beta of custom baselines - this allows you to deviate from the Salesforce standard and supply your own baseline which reflects your security requirements. From now on if your Health Check page shows an error or exception, that means you have a real security issue and need to deal with it quickly.

While you could create a custom baseline from scratch, the easiest way is to export the standard baseline and amend it. Navigate to Setup -> Security Controls -> Health Check and click the gear icon, then ‘Export XML’ from the resulting context menu:

 

Screen Shot 2017 04 22 at 15 27 33

 

This downloads the baseline to a file named ‘baseline.xml’ (or baseline (1,2,3,etc).xml if you keep downloading it to the same place on a Mac!), which you can then open in your favourite editor - I like Atom for XML files. Again, the Salesforce Help does a great job of explaining the format of the XML file so I’m not going to cover this. A couple of things to bear in mind:

  • You must change the Name and DeveloperName of the Baseline element, otherwise you’ll be trying to overwrite the standard, which you can’t do.
  • When you import the file, do it via the Lightning Experience. If you try this in Classic and hit an error, you get no information that an error has occurred. According to the help, “If your import fails, you receive a detailed message in Lightning Experience to help you resolve the problem”, which is pretty big talk when the actual message is: Screen Shot 2017 04 22 at 16 03 16

Changing the Baseline

One area where my dev org is considered substandard is the password expiration time. I have my passwords set up never to expire, as forcing users to change their passwords regularly often results in them choosing predictable passwords that are easier to break. The Salesforce health check standard generates a Medium Risk alert if the value is over 90 days and a High Risk alert if the value is over 180 days.

Screen Shot 2017 04 22 at 15 40 22

Here’s the section of the file that configures this:

Screen Shot 2017 04 22 at 15 41 05

If I change the standard value to the numeric equivalent of Never Expires, 2147483647.0, and the warning to one higher:

Screen Shot 2017 04 22 at 15 57 54

and import the updated XML file using the context menu shown above, I can then switch my Health Check to the custom baseline and see that my password expiration is now at a satisfactory level:

Screen Shot 2017 04 22 at 16 05 10

I am not a security consultant

Notwithstanding the fact that forcing users to change their passwords regularly is out of favour in some places, you should not take this post as my advising you about your password policies in any shape or form. If you base your security settings on things that you read in random blog posts then best of luck to you - I did it in a dev org to show the functionality as there’s nothing that I really care about in there.

I’d expect the majority of custom baselines to be making the security standard more restrictive, in regulated industries for example, but what you should set up is a baseline that aligns with your corporate security policies.

Here comes the wish list

Anyone familiar with my blogs or Medium stories knows that I usually have a wish list around Salesforce functionality, so if any product managers are reading this, here’s what I’d like to see:

  • A way to email out the health check, run against a custom baseline, on a schedule. Security and compliance departments can receive this first thing in the morning and spend the day focusing on other systems.
  • Notifications when the health check result changes - if my Evil Co-Worker blags admin rights and changes the configuration to allow previous passwords to be re-used, I want to know about it. (Ideally I’d receive an automated report at the end of every day detailing everything the Evil Co-Worker has done, but that might be asking too much).
  • A way to snapshot the health check output regularly, so that I can see if an org is trending towards a more or less baseline compliant security setup. 
  • Custom entries - for example, I can easily spin through the ApexClass sobjects and figure out how many aren’t using ‘with sharing’ (a rough sketch of this follows below). Security isn’t just about configuration, it’s also about code!
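A rough sketch of that last idea, using the Ajax Toolkit from a Visualforce page (the check is deliberately simplistic):

// count classes whose body doesn't declare 'with sharing' - Body can't be filtered
// in SOQL, so the check happens client side (managed package bodies come back hidden)
var result = sforce.connection.query("SELECT Name, Body FROM ApexClass");
var it = new sforce.QueryResultIterator(result);
var missing = 0;
while (it.hasNext()) {
    var cls = it.next();
    if (cls.Body && cls.Body.toLowerCase().indexOf('with sharing') === -1) {
        missing++;
    }
}
console.log(missing + ' classes without a sharing declaration');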

Related Posts