Sunday, 20 August 2017

Lightning Testing Service Part 2 - Custom Jasmine Reporter



(Note: This blog applies to the Jasmine variant of the Lightning Testing Service)


One of the cool things about Jasmine is how easy it is to add your own reporter - compared to some of the other JavaScript testing frameworks I’ve used in the past, it’s entirely straightforward. Essentially you are implementing an interface, although as JavaScript doesn’t have interfaces it’s very much a case of what you should implement rather than what you must. A Jasmine reporter is simply a JavaScript object with the appropriate functions for the framework to call when something interesting happens. Even cooler is the fact that the framework checks that you have provided a function before it is invoked, so if you don’t care about specific events, you just leave out the functions that handle those events and you are all good.
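To make the optional-function behaviour concrete, here’s a minimal plain JavaScript sketch of the kind of check the framework performs before invoking a reporter function (the dispatch function is a simplified stand-in for Jasmine’s internal dispatcher, not the real API):

```javascript
// A reporter that only cares about one event - the others are simply omitted.
var myReporter = {
    events : [],
    specDone : function(result) {
        this.events.push('specDone:' + result.status);
    }
};

// Simplified stand-in for Jasmine's dispatcher: only call the handler
// if the reporter actually provides it.
function dispatch(reporter, eventName, payload) {
    if (typeof reporter[eventName] === 'function') {
        reporter[eventName](payload);
    }
}

dispatch(myReporter, 'jasmineStarted', { totalSpecsDefined : 2 }); // skipped silently
dispatch(myReporter, 'specDone', { status : 'passed' });
console.log(myReporter.events); // [ 'specDone:passed' ]
```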


Some or all of the following functions can be provided to handle the various events that occur as tests are executed - basically things commencing and completing:

  • jasmineStarted/jasmineDone - called before any specs (tests) execute/once all tests have completed
  • suiteStarted/suiteDone - called before a specific suite (group of tests) execute/once they have completed
  • specStarted/specDone - called before a specific test executes/once it has completed

Once you have your object with the desired functions, it must be registered with the Jasmine environment before any tests are queued:

jasmine.getEnv().addReporter(myReporter);

and that’s all there is to it.


Below is an example Lightning component that creates a simple reporter to capture the success/failure of each suite/spec and log information to the console. Note that this relies on the Jasmine artefacts installed by the latest version of the Lightning Testing Service unmanaged package. The component is named KAB_JasmineReporter:


<aura:component extensible="true">
    <ltng:require scripts="{!join(',',
                $Resource.lts_jasmine + '/lib/jasmine-2.6.1/jasmine.js',
                $Resource.lts_jasmine + '/lib/jasmine-2.6.1/jasmine-html.js')}"
                  afterScriptsLoaded="{!c.doInit}" />
</aura:component>


The controller delegates to the helper:

({
    doInit : function(component, event, helper) {
        helper.initialiseJasmineReporter(component, event);
    }
})


The helper defines the reporter object and registers it with the Jasmine environment:

({
    myReporter : {
        content : '',
        suites : [],
        totalSuccesses : 0,
        totalFailures : 0,
        output : function(message) {
            console.log(message);
            this.content+=message + '\n';
        },
        clear: function() {
            this.content='';
            this.suites=[];
            this.totalSuccesses=0;
            this.totalFailures=0;
        },
        getCurrentSuite: function() {
            return this.suites[this.suites.length-1];
        },
        getCurrentSpec : function() {
            return this.getCurrentSuite().specs[this.getCurrentSuite().specs.length - 1];
        },
        jasmineStarted: function(suiteInfo) {
            this.output('Running suite with ' + suiteInfo.totalSpecsDefined + ' specs');
        },
        suiteStarted: function(result) {
            this.output('Suite started: ' + result.description + ' whose full description is: ' + result.fullName);
            this.suites.push({name : result.fullName,
                              specs : []});
        },
        specStarted: function(result) {
            this.output('Spec started: ' + result.description + ' whose full description is: ' + result.fullName);
            this.getCurrentSuite().specs.push({name: result.description,
                                               failures: [],
                                               failureCount: 0,
                                               successes: 0});
        },
        specDone: function(result) {
            this.output('Spec: ' + result.description + ' complete status was ' + result.status);
            this.output(result.failedExpectations.length + ' failures');
            for(var i = 0; i < result.failedExpectations.length; i++) {
                var failure=result.failedExpectations[i];
                this.output('Failure: ' + failure.message);
                this.getCurrentSpec().failures.push({message: failure.message,
                                                     stack : failure.stack});
                this.getCurrentSpec().failureCount++;
                this.totalFailures++;
            }
            this.output(result.passedExpectations.length + ' successes');
            this.getCurrentSpec().successes+=result.passedExpectations.length;
            this.totalSuccesses+=result.passedExpectations.length;
        },
        suiteDone: function(result) {
            this.output('Suite: ' + result.description + ' was ' + result.status);
            for(var i = 0; i < result.failedExpectations.length; i++) {
                this.output('AfterAll ' + result.failedExpectations[i].message);
            }
        },
        jasmineDone: function() {
            this.output('Finished tests');
            this.output('Successes : ' + this.totalSuccesses);
            this.output('Failures : ' + this.totalFailures);
            this.output('Details : ' + JSON.stringify(this.suites, null, 4));
        }
    },
    initialiseJasmineReporter : function(component, event) {
        console.log('Initialising jasmine reporter');
        var self=this;
        var env = jasmine.getEnv();
        // register the reporter so that Jasmine calls it as the tests execute
        env.addReporter(self.myReporter);
    }
})

A couple of tweaks to the jasmineTests app to include my reporter (and to limit to a couple of tests, otherwise there’s a lot of information in the console log):


<aura:application >
    <c:KAB_JasmineReporter />
    <c:lts_jasmineRunner testFiles="{!join(',',
        ...
    )}" />
</aura:application>

Executing the app produces the following console output:

Initialising jasmine reporter
Running suite with 2 specs
Suite started: A simple passing test whose full description is: A simple passing test
Spec started: verifies that true is always true whose full description is: A simple passing test verifies that true is always true
Spec: verifies that true is always true complete status was passed
0 failures
1 successes
Suite: A simple passing test was finished
Suite started: A simple failing test whose full description is: A simple failing test
Spec started: fails when false does not equal true whose full description is: A simple failing test fails when false does not equal true
Spec: fails when false does not equal true complete status was pending
0 failures
0 successes
Suite: A simple failing test was finished
Finished tests
Successes : 1
Failures : 0
Details : [
    {
        "name": "A simple passing test",
        "specs": [
            {
                "name": "verifies that true is always true",
                "failures": [],
                "failureCount": 0,
                "successes": 1
            }
        ]
    },
    {
        "name": "A simple failing test",
        "specs": [
            {
                "name": "fails when false does not equal true",
                "failures": [],
                "failureCount": 0,
                "successes": 0
            }
        ]
    }
]


While this has been a simple example, there’s a lot more that can be done with custom reporters, such as posting notifications with the test results, which I plan to explore in later posts.



Saturday, 5 August 2017

Lightning experience utility bar - add an app for that



This week I’ve been working on adding utility bar functionality to our BrightMedia appcelerator. Typically when I build functionality of this nature I’ll start off with the component markup, using hardcoded values to get the basic styling right, then make it dynamic with the data coming from the JavaScript controller/helper, before finally wiring it up to an Apex controller that extracts data from the Salesforce database, either through sObjects or custom settings.

For the purposes of this blog I’m going to say that it was presenting a list of Trailmixes, the new feature in Trailhead (it wasn’t, but this is a much simpler example and taps into the zeitgeist).

First incarnation

The first version of the component simply displayed a Trailmix with a button to open it:

<aura:component implements="flexipage:availableForAllPageTypes">
    <div class="slds-p-around--x-small slds-border--bottom slds-theme--shade">
        <div class="slds-grid slds-grid--align-spread slds-grid--vertical-align-center">
            <div>
                Blog Trailmix
            </div>
            <lightning:buttonIcon iconName="utility:open"
                                  alternativeText="Open" variant="border-filled"/>
        </div>
    </div>
</aura:component>

and, not surprisingly, this worked fine:

[Screenshot]

Second incarnation

The second version initialised a list of Trailmixes, still containing a single element, in the JavaScript controller, which the component then iterated. First the component:

<aura:component implements="flexipage:availableForAllPageTypes">
    <aura:attribute name="mixes" type="Object[]" />
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
    <aura:iteration items="{!v.mixes}" var="mix">
        <div class="slds-p-around--x-small slds-border--bottom slds-theme--shade">
            <div class="slds-grid slds-grid--align-spread slds-grid--vertical-align-center">
                <div data-record="{!mix.key}">
                    {!mix.name}
                </div>
                <lightning:buttonIcon onclick="{!c.OpenMix}" iconName="utility:open"
                                      alternativeText="Open" variant="border-filled"/>
            </div>
        </div>
    </aura:iteration>
</aura:component>

Next, the controller

({
    doInit : function(component, event, helper) {
        var mixes[];
        mixes.push({key:"1",
                    name:"Blog Trailmix"});
        component.set('v.mixes', mixes);
    }
})

Here things started to go awry - clicking the utility bar item to open it did nothing, and a few seconds later a toast would appear with a message along the lines of “we’re still working on your request”, but nothing further. Changing the component in the utility bar configuration to initialise in the background got a little further, but still no content - instead, a perpetual spinner:

[Screenshot]


Viewing the JavaScript console showed nothing out of the ordinary. I’d been having a few problems with my internet connection so I assumed it was either that or Salesforce having an issue, and as it was fairly late at night I decided to leave it until the morning to see if things were resolved. No such luck.

Wrap it and app it

I then did what I usually do when I’m having issues with a lightning component - create an app with just the component in and see what happens then. The app couldn’t be simpler:

<aura:application >
    <c:Trailmixes />
</aura:application>

Previewing this showed that there was an error that was somehow being swallowed:

[Screenshot]

which I’m sure the eagle-eyed reader has spotted - the declaration of my mixes variable that eventually gets stored as an attribute was missing an '=' character:

var mixes[];

After correcting this and refreshing the page a couple of times, I was back on track with my Trailmix component.

In conclusion

Always try any troublesome component in an app of its own - while in most cases you won’t have the utility bar swallowing errors, it’s way easier to debug a component when there aren’t 50 others on the same page firing events and changing attributes. Also, in my experience a syntax error of this kind sometimes shows up in the JavaScript console and sometimes doesn’t, so look there first.



Friday, 21 July 2017

Not Hotdog - Salesforce Einstein Edition


[Screenshot]


Anyone who is a fan of HBO’s Silicon Valley show will be familiar with Not Hotdog, Jian Yang’s app that determines whether an item of food is a hotdog or not. In a wonderful example of fiction made fact, the show has released iOS and Android applications in real life - you can read about how they did this in their Medium post. Around this time I was working through the Build a Cat Rescue App that Recognises Cat Breeds Trailhead project, which uses Einstein Vision to determine the breed of cat from an image, and it struck me that I could use this technology to develop a Salesforce version of Not Hotdog.

Building blocks

Trailhead Playground

As I’d already set up Einstein Vision and connected it to my Trailhead Playground, I decided to build on top of that rather than create a new developer edition. 

Einstein Vision Apex Wrappers

A key aspect of the project is the salesforce-einstein-vision-apex repository - Apex wrappers for Einstein Vision produced by Developer Evangelist René Winkelmeyer. The project somewhat glosses over these, but they provide a really nice mechanism to create and train an Einstein Vision dataset and then use that for predictions. It takes away pretty much all the heavy lifting, so thanks René.

Public Access Community

Let’s be honest, there was no way I was going to build a full-fledged app for this. I did consider building an unmanaged package and including the images I used to train the dataset, but it seemed a bit crazy to have everyone creating and training their own dataset for the same purpose. Given my reach in the Salesforce community this could literally result in tens of duplicate datasets :)

I therefore decided to expose this as an unauthenticated page on a Salesforce community. I had the option of using a Site but I also wanted to play around with unauthenticated access to Lightning Components and the docs say to use a community. 

Putting it all together

I had to make one change to the Einstein Vision Apex Wrappers - I couldn’t get the guest user to be able to access the Salesforce File containing the Einstein Vision key, so I just hardcoded it into the EinsteinVision_PredictionService class. Evil I know, but this is hardly going into production any time soon.

I then created a dataset named ‘nothotdog’ and trained it via a zip file of images. The zip file is organised into a directory per label - in my case there were two directories - ‘Hot Dog’ and ‘Not Hot Dog’.

I then added the following method to the EinsteinVision_Admin class, to match a supplied image in base64 form against the dataset.

public static String GetHotDogPredictionKAB(String base64) {
    String hdLabel='Unable to match hotdog';
    Blob fileBlob = EncodingUtil.base64Decode(base64);
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    for (EinsteinVision_Dataset dataset : datasets) {
        if (dataset.Name.equals('nothotdog')) {
            EinsteinVision_Model[] models = service.getModels(dataset);
            EinsteinVision_Model model = models.get(0);
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            hdLabel = probability.label;
        }
    }
    return hdLabel;
}

Next I needed a lightning component that would allow me to upload a file and send it back to the server, to execute the method from above. However, I also wanted this to work from a mobile device as file inputs on the latest Android and iOS allow you to take a picture and use that. The problem with this is that the image files are pretty huge, so I also needed a way to scale them down before submitting them. Luckily this can be achieved by drawing the image to an HTML5 canvas element scaled to the appropriate size.
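As a rough sketch of the scaling step - the dimension arithmetic only, since the actual drawImage call needs a browser, and the maxSide value of 1024 is an arbitrary choice of mine rather than taken from the component:

```javascript
// Cap the longest side at maxSide while preserving the aspect ratio.
// In the browser you would then size a canvas to these dimensions, call
// ctx.drawImage(img, 0, 0, width, height) and read the result back with
// canvas.toDataURL() before submitting it to the server.
function scaledDimensions(width, height, maxSide) {
    var scale = Math.min(1, maxSide / Math.max(width, height));
    return { width : Math.round(width * scale),
             height : Math.round(height * scale) };
}

console.log(scaledDimensions(4032, 3024, 1024)); // { width: 1024, height: 768 }
console.log(scaledDimensions(640, 480, 1024));   // small images are left unchanged
```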

Unfortunately this threw up another problem, in that when the Locker Service is enabled you don’t have an image element that can be drawn on a canvas, you have a secure element instead. There is no workaround to this so I had to drop the API version of my component down to 39. I guess one day the Locker Service will be finished and everything will work fine.

There’s a fair bit of code in the NotHotdog Lightning Component bundle so rather than making this the world’s longest post you can view it at this gist.

Next, I needed an app to surface the bundle through a Visualforce page. This is pretty simple - the only change to the usual way this is done is to implement the interface ltng:allowGuestAccess:

<aura:application access="GLOBAL" extends="ltng:outApp"
                  implements="ltng:allowGuestAccess">
    <aura:dependency resource="c:NotHotDog"/>
</aura:application>

Finally, the Visualforce page that is accessible via the community:

<apex:page docType="html-5.0" sidebar="false" showHeader="false" standardStylesheets="false"
           cache="false" applyHtmlTag="false">
    <html>
        <head>
            <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no;" />
            <apex:includeLightning />
        </head>
        <body>
            <div id="lightning" />
            <script>
                $Lightning.use("c:NotHotDogApp",   // the Lightning Out app shown above
                               function() {
                                   $Lightning.createComponent("c:NotHotDog",
                                       { },
                                       "lightning",
                                       function(cmp) {
                                       });
                               });
            </script>
        </body>
    </html>
</apex:page>

Yes we’ve got a video

Here’s a video of the app doing its thing - first recognising a hotdog and then correctly determining that the BrightGen head office building is not a hotdog. What a time to be alive.



It’s not bullet proof

The HBO team trained their app with hundreds of thousands of images; I just did a couple of hundred because this isn’t my day job! It’s pretty good on obvious hotdog images, but not so much when you take photos. Your mileage may vary. Also, take photos on a phone in landscape mode, as most phones rotate portrait images.

Try it out yourself

If you’d like to enjoy the majesty of this application on your own machine:

[QR code]


If you’re in London on Aug 2nd 2017, we’ll have a talk on Einstein Vision at our developer meetup.




Saturday, 1 July 2017

Lightning Testing Service Part 1




Back at Dreamforce 16 I gave a talk on Unit Testing Lightning Components using Jasmine. During that talk I said that I hoped that Salesforce would come up with their own testing framework for Lightning Components. I wasn’t disappointed, as the Lightning Testing Service (LTS) went into pilot at the end of May and I was lucky enough to be invited in. It’s been a slight challenge to find enough time to try out the LTS while still taking SFDX through its paces and making sure I give full attention to my day job, but it’s worth the effort.


The LTS is available on github for anyone to try out - I’m not sure how much support you’ll get if you aren’t on the pilot, but I’ve found it works as-is. Now that SFDX is in open beta you can easily combine the two - I’ve just done exactly that and it took around 30 minutes including signing up for trial orgs, downloading the latest SFDX CLI etc. 

The Lightning Testing Service is agnostic about the JavaScript testing framework that you use, but all the samples are based on Jasmine. Having used a few of them, I think this is a good idea, as Jasmine has a great set of features and, most importantly, an equivalent for most of the features of the Apex testing framework. The one area where Jasmine is lacking, I think, is documentation. There are plenty of examples, but not that much in the way of explanation as to how the examples actually work. While you can dig into the code, as it’s all open source, if you are reasonably new to JavaScript and/or front-end unit testing it’s a struggle. I found Jasmine JavaScript Testing by Paulo Ragonha to be an excellent introduction. While the latter chapters of the book focus on React, the first 6 chapters cover generic testing with Jasmine and explain the concepts and features really well (I have no affiliation with the book or author).

Apex eye view

Jasmine concepts map to Apex test concepts as follows:

  • Suite → Suite, e.g. describe('initialise', function () {...})
  • Test Method → Spec, e.g. it('gets the accounts', function () {...})
  • Assert → Expectation
  • Setup → beforeEach/All

Jasmine also has a couple of concepts that Apex doesn’t have:

  • afterEach/All - teardown code to be executed after each spec (afterEach) or after the last spec (afterAll). You don’t have this in Apex, as test transactions are rolled back so there is nothing to tear down.
  • Spies - these allow you to stub out a function and track the number of times it has been called and the parameters it was called with. These are really useful when you don’t have transactions that automatically roll back, as you need to make sure you stub out anything that might commit to the Salesforce database.
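The spy concept can be illustrated with a hand-rolled sketch - real specs would use Jasmine’s spyOn, but the mechanics are broadly the same (makeSpy and the server object here are invented purely for illustration):

```javascript
// A minimal spy: replaces a function, records each call's arguments,
// and returns a canned value instead of executing the original.
function makeSpy(returnValue) {
    var spy = function() {
        spy.calls.push(Array.prototype.slice.call(arguments));
        return returnValue;
    };
    spy.calls = [];
    return spy;
}

var server = {
    save : function(record) { /* would commit to the Salesforce database */ }
};
server.save = makeSpy('OK');   // stub out the call that would hit the server

var status = server.save({ name : 'Account 1' });
console.log(server.save.calls.length); // 1
console.log(status);                   // OK
```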

Running Tests

One of the challenges when unit testing on the front end is figuring out how to execute the tests. The route I went was to make it the responsibility of a component to notify a controlling component that it had unit tests and to schedule those tests. There were a couple of downsides to this:

  1. The test code was tightly coupled with the production code so would be deployed to production
  2. The controlling component had to know how many components had tests so that it could wait until the appropriate number had notified it that their tests were queued.

When I presented this I made the point that there are a number of ways of doing this, and the LTS takes a somewhat different approach.

There still has to be a component that is responsible for loading Jasmine, setting up the reporter(s) and managing the tests, and the LTS examples have one of these. This component also schedules the tests by loading one or more static resources that contain collections of Jasmine test suites. As these resources are loaded via the <ltng:require /> tag, the JavaScript code is automatically executed by the browser and schedules the tests with the Jasmine runner.

This approach has the upside of decoupling the test code from the actual component, allowing you full control over whether you want to deploy them to production, and removing the requirement for the component executing the tests to know anything about how many tests are being executed. It also allows you to easily group tests into functional areas.

The downside is that it decouples the test code from the actual component, which means that if you want to stub out a method it has to be exposed as part of the component’s API via an <aura:method /> attribute. I’m not mad keen on this, as it feels like I’m exposing the internals for pure testing purposes, and I can’t stop my Evil Co-Worker from creating components that use these methods for nefarious purposes. That said, I’m pretty sure it would be possible to leave tests that rely on access to a component’s internals inside the component itself, by dynamically creating the component once the Jasmine framework is all set up. This is something I hope to cover in a later blog post, assuming I can get it working!

SFDX Integration

This is probably the coolest aspect of the LTS. The SFDX CLI with Force plugin version 40 includes a new command to execute Lightning component unit tests:

sfdx force:lightning:test:run

This creates a browser session and navigates to a lightning application (a default application is used unless you specify one), which executes the tests. The CLI is then able to get at the results of the tests and output them. I’m not sure how this last piece works, but it feels like something you’d need to find a way to replicate if using another JavaScript testing framework. What it means, however, is that you can include these unit tests in a continuous integration process, thus ensuring that your components work as expected.

That’s it for part 1 - there’s a lot of information at the github repo and there’s no point in me replicating that just to get some eyes on my blog.



Saturday, 10 June 2017

Locker Service, Lightning Components and JavaScript Libraries




As I’ve previously blogged, the Summer 17 release of Salesforce allows you to turn the locker service on or off based on the API version of a component. This is clearly an awesome feature, but there is a gotcha which I came across this week while working with the Lightning Testing Service Pilot.

I have a JavaScript library containing functionality that needs to work as a singleton (so a single instance that all Lightning Components have access to). This library is a static resource that is loaded via the <ltng:require /> standard component.

One window to rule them all?

In JavaScript world there is a single Window object, representing the browser’s window, which all global objects, functions and variables are attached to. When my JavaScript library is loaded, an immediately invoked function expression executes and attaches my library object to the Window.
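The attach-on-load pattern described above looks something like this (myLib is a hypothetical library name; the real library’s contents don’t matter for the illustration):

```javascript
// An immediately invoked function expression runs as soon as the script is
// evaluated and hangs a singleton off the global object (the browser Window,
// or globalThis when run outside a browser).
(function(global) {
    if (!global.myLib) {               // only create the singleton once
        global.myLib = { debug : false, version : '1.0' };
    }
})(typeof window !== 'undefined' ? window : globalThis);

console.log(typeof myLib); // 'object'
```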

In Lightning Components world, with the locker service enabled, the single Window object changes somewhat. Instead of a Window, your components see a SecureWindow object which is shared among all components in the same namespace. This SecureWindow is isolated from the real Window for security reasons. In practice, this means that if you mix locker and non-locker Lightning Components on the same page, there are two different window concepts which know nothing about each other.

Example code 

The example here is a lightning application that attaches a String to the Window object when it initialises, and includes two components that each attempt to access this variable from the window, one at API 40 and one at API 39. The app also attempts to access the variable, just to show that it is correctly attached.


<aura:application >
    <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
    <button onclick="{!c.showFromWindow}">Show from in App</button> <br/>
    <c:NonLockerWindow /> <br/>
    <c:LockerWindow /> <br/>
</aura:application>


({
    doInit : function(component, event, helper) {
        window.testValue="From the Window";
    },
    showFromWindow : function(component, event, helper) {
        alert('From window = ' + window.testValue);
    }
})

Locker Component

<aura:component >
    <button onclick="{!c.showFromWindow}">Show from Locker Component</button>
</aura:component>


({
    showFromWindow : function(component, event, helper) {
        alert('From window = ' + window.testValue);
    }
})

Non Locker Component

<aura:component >
    <button onclick="{!c.showFromWindow}">Show from non-Locker Window</button>
</aura:component>


({
    showFromWindow : function(component, event, helper) {
        alert('From window = ' + window.testValue);
    }
})

Executing the example

If I set the API version of the app to 40, this enables the locker service. Clicking the three buttons in turn shows the following alerts:

From App

[Screenshot]

As expected, the variable is defined.

Non-Locker Component

[Screenshot]

As the application is running with the locker service enabled, the variable is attached to a secure window. The non-locker service component cannot access this so the variable is undefined.

Locker Component

[Screenshot]

As this component is also executing with the locker service enabled, it has access to the secure window for its namespace. As the namespace of the app and this component is the same, the variable is available.

Changing the app to API version 39 allows the non-locker component to access the variable from the regular JavaScript window, while the locker component doesn’t have access as the variable is not defined on the secure window.

So what?

This has a couple of effects on my code if I mix API versions so that my page contains a combination of locker and non-locker components:

  • I can’t rely on the library being made available by the containing component or app, so I have to ensure that every component loads the static resource. This is best practice anyway, so not that big a deal.
  • I don’t have a singleton library any more. While this might not sound like a big deal, given that I can load it into whatever window variant I have, it means that if I change something in that library from one of my components, it only affects the version attached to the window variant that my component currently has access to. For example, if I set a flag to indicate debug mode is enabled, only those components with access to the specific window variant will pick this up. I suspect I’ll solve this by having two headless components that manage the singletons, one at API 40 and one at API < 40, and sending an event to each of these to carry out the same action.





Sunday, 4 June 2017

Visualforce Page Metrics in Summer 17




The Summer 17 release of Salesforce introduces the concept of Visualforce Page Metrics via the SOAP API. This new feature allows you to analyse how many page views your Visualforce pages received on a particular day. This strikes me as really useful functionality - I create a lot of Visualforce pages (although they are more Lightning Component based these days), to allow users to manage multiple objects on a single page, for example. After I’ve gone to the effort of building the page I’m always curious as to whether anyone is actually using it!


A slight downside to this feature is that the information is only available via the SOAP API. The release notes give an example of using the Salesforce Workbench, but ideally I’d like a Visualforce page to display this information without leaving my Salesforce org. Luckily, as I’ve observed in previous blog posts, the Ajax Toolkit provides a JavaScript wrapper around the SOAP API that can be accessed from Visualforce. 

Sample Page

In my example page I’m grouping the information by date and listing the pages that were accessed in order of popularity. There’s not much information in the page as yet because I’m executing this from a sandbox, so the page may get unwieldy in a production environment and need some pagination or filter criteria.

[Screenshot]

Show me the code

Once the Ajax Toolkit is setup, the following query is executed to retrieve all metrics:

var result = sforce.connection.query(
   "SELECT ApexPageId,DailyPageViewCount,Id,MetricsDate FROM VisualforceAccessMetrics " +
   "ORDER BY MetricsDate desc, DailyPageViewCount desc");

The results of the query can then be turned into an iterator and the records extracted - I’m storing these as an array in an object with a property per date:

var it = new sforce.QueryResultIterator(result);
while(it.hasNext()) {
    var record = it.next();
    var dEle=metricByDate[record.MetricsDate];
    if (!dEle) {
        dEle=[];
        metricByDate[record.MetricsDate]=dEle;
    }
    // add to the metrics organised by date
    dEle.push(record);
}
 This allows me to display the metrics by Visualforce page id, but that isn’t overly useful, so I query the Visualforce pages from the system and store them in an object with a property per id - analogous to an Apex map:

result = sforce.connection.query(
    "Select Id, Name from ApexPage order by Name desc");
it = new sforce.QueryResultIterator(result);
var pageNamesById={};
while(it.hasNext()) {
    var record = it.next();
    pageNamesById[record.Id]=record.Name;
}

 I can then iterate the date properties and output details of the Visualforce page metrics for those dates:

for (var dt in metricByDate) {
    if (metricByDate.hasOwnProperty(dt)) {
        var recs=metricByDate[dt];
        output+='<tr><th colspan="3" style="text-align:center; font-size:1.2em;">' + dt + '</th></tr>';
        for (var idx=0, len=recs.length; idx<len; idx++) {
            var rec=recs[idx];
            var name=pageNamesById[rec.ApexPageId];
            output+='<tr><td>' + name + '</td>';
            output+='<td>' + rec.MetricsDate + '</td>';
            output+='<td>' + rec.DailyPageViewCount + '</td></tr>';
        }
    }
}

You can see the full code at this gist.


Saturday, 6 May 2017

Selenium and SalesforceDX Scratch Orgs


[Screenshot]


Like a lot of other Salesforce developers I use Selenium from time to time to automatically test my Visualforce pages and Lightning Components. Now that I’m on the SalesforceDX pilot, I need to be able to use Selenium with scratch orgs. This presents a slight challenge, in that Selenium needs to open the browser and log in to the scratch org rather than the sfdx CLI. Wade Wegner’s post on using scratch orgs with the Salesforce Workbench detailed how to set a scratch org password, so I started down this route before realising that there’s a simpler way, based on the sfdx force:org:open command. Executing this produces the output:

Access org <org_id> as user <username> with the following URL:<really long sid>

so I can use the same mechanism once I have the URL and sid for my scratch org which, as Wade’s post pointed out, I can get by executing sfdx force:org:describe. Even better, I can get this information in JSON format, which means I can easily process it in a Node script. Selenium also has a Node web driver so the whole thing comes together nicely.
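For example, building the login URL from the describe output might look like this - note that the instanceUrl and accessToken field names are my assumption about the JSON shape, so check them against the actual output of sfdx force:org:describe --json:

```javascript
// Build a frontdoor.jsp login URL from the parsed describe output,
// following the same pattern as the URL printed by force:org:open.
function loginUrl(orgDetail) {
    return orgDetail.instanceUrl + '/secur/frontdoor.jsp?sid=' + orgDetail.accessToken;
}

// With real data you would obtain orgDetail via something like:
//   JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']))
console.log(loginUrl({ instanceUrl : 'https://example.my.salesforce.com',
                       accessToken : '00Dxx.FakeSid' }));
// https://example.my.salesforce.com/secur/frontdoor.jsp?sid=00Dxx.FakeSid
```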

In the rest of this post I’ll show how to create a Node script that gets the org details programmatically, opens a Chrome browser, opens a page that executes some JavaScript tests and figures out whether the tests succeeded or not. The instructions are for MacOS as that is my platform of choice.

Setting Up

In order to control the chrome browser from Selenium you need to download the Chrome Webdriver and add it to your system PATH - I have a generic tools directory that is already on my path so I downloaded it there. 

Next, clone the github repository by executing:

 git clone

The Salesforce application is based on my Unit Testing Lightning Components with Jasmine talk from Dreamforce 16. You probably want to update the config/workspace-scratch-def.json file to change the company detail etc to your own information. 

Setting up the Scratch Org

Change to the cloned repo directory:

cd dxselenium

Then login to your dev hub:

sfdx force:auth:web:login --setdefaultdevhubusername --setalias my-hub-org

and create a scratch org - to make life easier I set the --setdefaultusername parameter so I don’t have to specify the details on future commands.

sfdx force:org:create --definitionfile config/workspace-scratch-def.json --setalias LCUT --setdefaultusername

Finally for this section, push the source:

sfdx force:source:push

Setting up Node

(Note that I’m assuming here that you have node installed).

Change to the node client directory:

cd node

Get the dependencies:

npm install

Executing the Test

Everything is now good to go, so execute the Node script that carries out the unit tests:

node ltug.js

You should see a chrome browser starting and the Node script producing the following output:

Getting org details
Logging in
Opening the unit test page
Running tests
Checking results
Status = Success

The script exits after 10 seconds to give you a chance to look at the page contents if you are so inclined. 

The Chrome browser output can be viewed on youtube:

Show me the Node

The Node script is shown below:

var child_process=require('child_process');

var webdriver = require('selenium-webdriver'),
By = webdriver.By,
until = webdriver.until;

var driver = new webdriver.Builder()
    .forBrowser('chrome')
    .build();

var exitStatus=1;

console.log('Getting org details');
var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));
var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;
console.log('Logging in');
driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));
driver.navigate().to(instance + '/c/');
driver.sleep(2000).then(_ => console.log('Running tests'));
driver.sleep(2000).then(_ => console.log('Checking results'));

driver.findElement(By.id('status')).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});
// leave the browser open for 10 seconds before exiting
driver.sleep(10000);
driver.quit().then(_ => process.exit(exitStatus));

After the various dependencies are set up, the org details are retrieved via the sfdx force:org:describe command:

var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));

From the deserialised orgDetail object, the instance URL and access code are extracted:

var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;
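The shape of the payload can be seen with a hypothetical, heavily trimmed version of the describe output - just enough to show the two fields the script extracts, and how the frontdoor login URL is assembled from them (the instance URL and token values below are invented):

```javascript
// Hypothetical, trimmed-down force:org:describe --json output - the
// real payload contains many more fields; only the two the script
// cares about are shown here.
var describeJson = JSON.stringify({
    instanceUrl: 'https://fun-fish-1234-dev-ed.cs70.my.salesforce.com',
    accessToken: '00Dxx0000000000!AQEXAMPLETOKEN'
});

var orgDetail = JSON.parse(describeJson);
var instance = orgDetail.instanceUrl;
var token = orgDetail.accessToken;

// the URL handed to Selenium to log in without a password
var loginUrl = instance + '/secur/frontdoor.jsp?sid=' + token;
console.log(loginUrl);
```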

And then the testing can begin. Note that the Selenium web driver is promise based, but also provides a promise manager which handles everything when using the Selenium API. In the snippet below the driver.sleep won’t execute until the promise returned by the driver.get function has succeeded.

driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));

However, when using non-Selenium functions, such as logging some information to the console, the promise manager isn’t involved so I need to manage this myself by supplying a function to be executed when the promise succeeds, via the then() method.
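The ordering can be sketched with plain promises - no Selenium involved - to show why the non-Selenium work has to live in a then() callback (the step names and delays are invented for illustration):

```javascript
// Plain-promise sketch of the control flow: each then() callback only
// runs once the previous step's promise has resolved, mimicking the
// way the promise manager queues Selenium operations.
function step(name, ms) {
    return new Promise(function(resolve) {
        setTimeout(function() { resolve(name); }, ms);
    });
}

var order = [];
step('login', 30)
    .then(function(name) {
        order.push(name);                 // non-Selenium work goes here
        return step('open page', 20);
    })
    .then(function(name) {
        order.push(name);
        console.log(order.join(' -> ')); // login -> open page
    });
```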

Note that I’ve added a number of sleeps as I’m testing this on my home internet which is pretty slow over the weekends.

The script then opens my test page and clicks the button to run the tests:

driver.navigate().to(instance + '/c/');
driver.sleep(2000).then(_ => console.log('Running tests'));

Finally, it locates the element with the id of status and checks that the inner text contains the word ‘Success’ - note that again I have to manage the promise result as I’m outside the Selenium API.

driver.findElement(By.id('status')).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});



Saturday, 29 April 2017

Locker Service in Summer 17



The Summer 17 release of Salesforce sees the activation of the Lightning Components Locker Service critical update - something that I’d say has been anticipated and feared in equal measure since it was announced. If you’ve been hiding under a rock for the last couple of years, the Locker Service (among other things) adds a security layer to your Lightning Components JavaScript, isolating components by namespace to ensure that your Evil Co-worker can’t write components that go tinkering with the standard Salesforce components for nefarious purposes.

The Breaking Changes Problem

The problem with enforcing the Locker Service is that it breaks code that was written before the Locker Service was known about. In many cases this was work that a customer paid a third party to carry out, and that third party has long since departed. Breaking that functionality through a change to the platform can be contentious, with third parties expecting to be paid to fix problems and customers expecting them to be fixed for nothing as key functionality no longer works. Now there were warnings in the docs from the get-go, basically saying this works now but might not work in the future, and I have no sympathy for anyone that wrote code that flew in the face of this warning. However, there are other considerations - some third party libraries break, for example, and that really isn’t something that could be defended against back in the day. Changes to the platform that break existing code written with best endeavours just aren’t cool.

The Breaking Changes Solution

The Summer 17 release notes preview contain an entry that will be music to the ears of any customer or consultant in this position - the Locker Service will be enforced based on API version. Anything on Summer 17 or later (API 40) will be subject to the Locker Service, while anything earlier (API 39 or lower) will not. You can think of this a bit like the ‘without sharing’ keyword - apply that to an Apex class and it bypasses sharing settings; apply API 39 to a Lightning Component and it bypasses the Locker Service. From the horse’s mouth (the release notes preview):

When a component is set to at least API version 40.0, which is the version for Summer ’17, LockerService is enabled. LockerService is disabled for any component created before Summer ’17 because these components have an API version less than 40.0. To disable LockerService for a component, set its API version to 39.0 or lower.

I think this solution is pretty cool - it allows existing code to continue working while enforcing appropriate security on new code - whoever at Salesforce managed to persuade the security team to go this route, kudos to you!

Note that this is from the preview release notes so the situation could change, although let’s hope it doesn’t!

Use These Powers for Good

This new functionality shouldn’t be taken as an invitation to allow your Lightning Components to blaze a trail of destruction on every page that is unfortunate enough to include them. It should only be used as a last resort going forward. If for no other reason than it ties your component to an ageing API version so you’ll miss out on all the cool stuff that comes in the future.



Saturday, 22 April 2017

Salesforce Health Check Custom Baseline



The Salesforce Health Check has been around for a year or so now, debuting in the Spring 16 release of Salesforce (and bearing a striking resemblance to an AppExchange listing with the same name). The Salesforce Help topic gives chapter and verse on this so I’m not going to spend any time on the basic functionality, except to say that it’s a great tool for allowing you to see at a glance how your Salesforce org shapes up security-wise. There has been one caveat though: the baseline it is compared against is set by Salesforce, not you, which means that if your security standard differs from the one true path you’ll see warnings and errors. As anyone who has accepted a unit test failure for more than one build knows, as soon as people expect errors they stop counting how many there are. Thus you may start out accepting a single warning, but before you know it you have a number of potential security problems which are being ignored because “that page always shows errors”.

Custom Baselines

Spring 17 introduced the beta of custom baselines - this allows you to deviate from the Salesforce standard and supply your own baseline which reflects your security requirements. From now on if your Health Check page shows an error or exception, that means you have a real security issue and need to deal with it quickly.

While you could create a custom baseline from scratch, the easiest way is to export the standard baseline and amend it. Navigate to Setup -> Security Controls -> Health Check and click the gear icon, then ‘Export XML’ from the resulting context menu:


Screen Shot 2017 04 22 at 15 27 33


This downloads the baseline to a file named ‘baseline.xml’ (or baseline (1,2,3,etc).xml if you keep downloading it to the same place on a mac!), which you can then open in your favourite editor - I like Atom for XML files. Again, the Salesforce Help does a great job of explaining the format of the XML file so I’m not going to cover this. A couple of things to bear in mind:

  • You must change the Name and DeveloperName of the Baseline element, otherwise you’ll be trying to overwrite the standard, which you can’t do.
  • When you import the file, do it via the Lightning Experience. If you try this in Salesforce Classic and an error occurs, you get no indication that anything has gone wrong. According to the help “If your import fails, you receive a detailed message in Lightning Experience to help you resolve the problem”, which is pretty big talk when the actual message is: Screen Shot 2017 04 22 at 16 03 16
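As a sketch of that rename step - note the XML layout here is an assumption (Name and DeveloperName are shown as attributes of the Baseline element purely for illustration; check your exported baseline.xml for the real shape):

```javascript
// Hypothetical rename of the exported baseline before import.
// The attribute-based layout below is a guess, not the documented
// schema - adapt the patterns to match your actual baseline.xml.
var baselineXml =
    '<Baseline Name="Salesforce Baseline Standard" ' +
    'DeveloperName="Salesforce_Baseline_Standard"></Baseline>';

var customXml = baselineXml
    .replace(/Name="[^"]*"/, 'Name="KAB Baseline"')
    .replace(/DeveloperName="[^"]*"/, 'DeveloperName="KAB_Baseline"');

console.log(customXml);
```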

Changing the Baseline

One area where my dev org is considered substandard is the password expiration time. I have my passwords set up never to expire, as forcing users to change their passwords regularly often results in them choosing predictable passwords that are easier to break. The Salesforce health check standard generates a Medium Risk alert if the value is over 90 days and a High Risk alert if the value is over 180 days.

Screen Shot 2017 04 22 at 15 40 22

Here’s the section of the file that configures this:

Screen Shot 2017 04 22 at 15 41 05

If I change the standard value to the numeric equivalent of Never Expires, 2147483647.0, and the warning to one higher:

Screen Shot 2017 04 22 at 15 57 54

and import the updated XML file using the context menu shown above, I can then switch my Health Check to the custom baseline and my password expiration is now at a satisfactory level:

Screen Shot 2017 04 22 at 16 05 10

I am not a security consultant

Notwithstanding the fact that forcing users to change their passwords regularly is out of favour in some places, you should not take this post as my advising you about your password policies in any shape or form. If you base your security settings on things that you read in random blog posts then best of luck to you - I did it in a dev org to show the functionality as there’s nothing that I really care about in there.

I’d expect the majority of custom baselines to be making the security standard more restrictive, in regulated industries for example, but what you should set up is a baseline that aligns with your corporate security policies.

Here comes the wish list

Anyone familiar with my blogs or Medium stories knows that I usually have a wish list around Salesforce functionality, so if any product managers are reading this, here’s what I’d like to see:

  • A way to email out the health check, run against a custom baseline, on a schedule. Security and compliance departments can receive this first thing in the morning and spend the day focusing on other systems.
  • Notifications when the health check result changes - if my Evil Co-Worker blags admin rights and changes the configuration to allow previous passwords to be re-used, I want to know about it. (Ideally I’d receive an automated report at the end of every day detailing everything the Evil Co-Worker has done, but that might be asking too much).
  • A way to snapshot the health check output regularly, so that I can see if an org is trending towards a more or less baseline compliant security setup. 
  • Custom entries - for example, I can easily spin through the ApexClass sobjects and figure out how many aren’t using ‘with sharing’. Security isn’t just about configuration, it’s also about code!
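That last item is easy enough to prototype - here's a sketch in Node that scans a set of made-up ApexClass Body values and flags classes with no sharing declaration at all (one interpretation of "aren't using 'with sharing'"):

```javascript
// Sketch of the custom 'with sharing' check from the wish list.
// The class bodies are invented sample data standing in for the
// Body field of queried ApexClass records.
var classBodies = [
    'public with sharing class AccountService { }',
    'public class LegacyUtil { }',
    'public without sharing class AdminOnlyService { }'
];

var noSharing = classBodies.filter(function(body) {
    // flag classes that declare neither 'with sharing' nor
    // 'without sharing' - they inherit the calling context
    return !/\bwith(out)?\s+sharing\b/.test(body);
});

console.log(noSharing.length + ' class(es) without a sharing declaration');
```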



Saturday, 15 April 2017

Lightning Design System in Visualforce Part 3 - Built In SLDS




In the past, using the Salesforce Lightning Design System (LDS) in Visualforce (or Lightning Components for that matter) required downloading the latest version from the home page and uploading it as a static resource to each Salesforce org that you wanted to use it on. I dread to think how many copies of exactly the same zip file have been uploaded over the last 18 months or so, but I’d imagine a significant amount of storage is currently dedicated to just this purpose. Probably only beaten out by a million copies of jQuery and Bootstrap. In the Spring 17 release of Salesforce, this is no longer the case - a single Visualforce tag can now do the heavy lifting for you.

The SLDS Tag

Simply add <apex:slds /> to your page, nest your markup in a div styled with the slds-scope class, and you are good to go. For example, the following page:

<apex:page showHeader="false" sidebar="false" standardStylesheets="false"
           standardController="Account" applyHtmlTag="false">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xlink="http://www.w3.org/1999/xlink">
        <head>
            <apex:slds />
        </head>
        <body>
            <div class="slds-scope">
                <div class="slds-page-header" role="banner">
                    <div class="slds-grid">
                        <div class="slds-col slds-has-flexi-truncate">
                            <div class="slds-media slds-no-space slds-grow">
                                <div class="slds-media__figure">
                                    <svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
                                        <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
                                    </svg>
                                </div>
                                <div class="slds-media__body">
                                    <p class="slds-text-title--caps slds-line-height--reset">Account</p>
                                    <h1 class="slds-page-header__title slds-m-right--small slds-align-middle slds-truncate"
                                        title="{!Account.Name}">{!Account.Name}</h1>
                                </div>
                            </div>
                        </div>
                    </div>
                    <ul class="slds-grid slds-page-header__detail-row">
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Description">Description</p>
                            <p class="slds-text-body--regular slds-truncate" title="{!Account.Description}">{!Account.Description}</p>
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Industry">Industry</p>
                            {!Account.Industry}
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Visualforce">Visualforce</p>
                            No static resources were used!
                        </li>
                    </ul>
                </div>
            </div>
        </body>
    </html>
</apex:page>

renders as:

Screen Shot 2017 04 15 at 12 29 30

which is pretty cool, and makes throwing a page together to test out some ideas in a new org a lot easier than it has been.

What about Images?

Without the LDS static resource, image references need to be handled a slightly different way, via the $Asset global. Use this wherever you’d use your static resource previously. E.g. in the example markup above, I use the $Asset global as follows:

<svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
   <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
</svg>

although continuing the pattern of making sure SVG is difficult to use, you have to add a custom namespace to the page:

<html xmlns="http://www.w3.org/1999/xhtml" xmlns:xlink="http://www.w3.org/1999/xlink">

and you can’t do that unless you turn off the standard Salesforce header, sidebar and stylesheets. If you see an SVG on a Salesforce page in the wild, take a moment to appreciate the hoops that the developer jumped through in order to get it there.

So no more static resources?

Well that depends. The SLDS tag always pulls in the latest version of the Lightning Design System, so much depends on whether you want that behaviour. It means that things may change underneath you, possibly in a breaking way. If it’s for your internal Salesforce org and you have people who will be able to make any changes required by the latest version, then emphatically yes. If you are building pages for a consulting customer who expects them to continue working in the future with zero effort, then maybe not. As always, there is no substitute for thinking about how the application will be used, both now and in the future.



Saturday, 11 March 2017

One Trigger to Rule Them All? It Depends.




Anyone involved in Salesforce development will be familiar with triggers and the religious wars around how they should be architected. My view is that, like many things in life, there is no right answer - it all depends on the context. The options pretty much boil down to one of the two following options:

One Trigger per Object

This approach mandates a single trigger file that handles all possible actions, declared along the lines of:

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete) {

  // trigger body
}
One Trigger per Object and Action

This approach takes the view that each trigger should handle a single action on an object:

trigger AccountTrigger_bu on Account (before update) {

  // trigger body
}

trigger AccountTrigger_au on Account (after update) {

  // trigger body
}

I’ve read many blog posts stating flat out that the first way is best practice. No nuances, it’s just the right way and should be used everywhere, always. Typically the reason given is that this is how the author does it, therefore everyone should. I strongly disagree with this view. Not that one trigger per object shouldn’t be used, but that it shouldn’t be used without applying some thought.

Note: one trigger per object and action is the maximum granularity - never have two triggers for the same action on the same object, as then you have no control over the order of execution and this will inevitably bite you. Plus you’ve spread business logic across multiple locations and made life harder for everyone.

Consulting versus Product Development

The reason I have this view is that I work across consultancy and product development at BrightGen. Most of the work I do nowadays is related to our business accelerators, such as BrightMEDIA, but I still have Technical Architect responsibility across a number of consulting engagements, which are often implementations for companies that don’t have a lot of internal Salesforce expertise, and what they have isn’t the developer skill set.

One message I’m always repeating to our consultants is to have some empathy with our customers and think about those that come after us. Thus we use clicks not code and try to take the simplest approach that will work, so that we don’t leave the customer with a system that requires them to come back to us every time they need to change anything. Obviously we like to take our customers into service management after a consultancy engagement, but we want them to choose to do that based on the good job that we have done, rather than because we have locked them in by making things complex.

Sample Scenario

So here’s a hypothetical example - as part of a solution I need to take some action when a User is updated, but not for any other trigger events. At some point later a customer administrator wants to know if there is any automated processing that takes place after a user is inserted to triage a potential issue.

If I’ve gone with the one trigger per object and action combination, they can simply go to the setup page for the object in question and look at the triggers. The naming convention makes it clear that the only trigger in place is to handle the case when a user is updated, so they can stop this particular line of enquiry (assuming my Evil Co-Worker hasn’t chosen an inaccurate name just to cause trouble).

Screen Shot 2017 03 11 at 07 29 35

If I’ve gone with one trigger per object, the administrator is none the wiser. There is a single trigger, but nothing to indicate what it does. The administrator then has to look into the trigger code to figure out if there is any after insert processing. What they will then find is one of two things:

  • A load of wavy if statements checking the type of action - before vs after, insert vs update etc - and then calling out to external code. Most developers try to make sure that an external method is called only once, so you often end up with a wall of if statements for the administrator to enjoy
  • Delegation to a trigger handler, leaving the admin to look at another source file to try to figure out what is happening.

Now I don’t know about you, but if administrators are having to look at my source code, and even worse trying to understand Apex code to figure out something as basic as this, I’d feel like I’d done a pretty poor job.

Enter the Salesforce Optimizer

The Spring 17 Release introduced the Salesforce Optimiser - an external tool that analyses your implementation and sends you the results - here’s what it has to say about my triggers:

Screen Shot 2017 03 11 at 07 54 33

And there’s the dogma writ large again - a big red warning alert saying I should have one trigger per object, just because. Don’t get me wrong, I think the Salesforce Optimizer is a great idea and has the potential to be a real time saver, and that the intention is to help, but it’s a really blunt instrument that presents opinion as fact.

The chances are at some point my customers will run this and ask me why I’ve gone against the recommended approach, even though in their case it is absolutely the appropriate approach. I find I have no problem explaining this to customers, but I do have to take the time to do that. Thanks for throwing me under the bus Salesforce!

In Conclusion

What you shouldn’t take away from the above is that one trigger per object is the wrong approach - in many situations it’s absolutely the right approach and it’s the one I use in some of my product development. In other situations it isn’t the right approach and I don’t use it. What you should take away is that it’s important to think about how to use triggers for every project you undertake - going in with a dogmatic view that there is one true way to do things and everything will be brute-forced into that may make you feel like a l33t developer but is unlikely to be helpful in the long term. It may also mark you out as a Rogue High Performer, and you really don’t want that.


Sunday, 5 March 2017


SalesforceDX Week 1

(NOTE: This post is based on the SalesforceDX pilot which, like all pilots, may never make it to GA. I bet it does though!)



The SalesforceDX pilot started a week or two ago and BrightGen were lucky enough to be selected to participate (thanks to the sterling efforts of my colleague Kieran Maguire who didn’t screw up his signup, unlike me!). This week I’ve managed to spend a reasonable amount of time reading the docs and trying out the basics and it’s clear already that this is going to be a game changer. 

There will be bugs!

This isn’t the first pilot that I’ve been involved in, but it’s by far the largest in terms of new functionality - a new version of the IDE, a new CLI with a ton of commands and a new type of org. A pilot is a two way street - you get to play with the new feature long before it becomes (if it ever does!) GA, but the flip side is that this won’t be tested to destruction like a GA feature. With the best will in the world there’s no way that Wade Wegner and co could test out every possible scenario, so some stuff will break, and that’s okay. When things break (or work in a non-intuitive way) you sometimes get a chance to influence how the fix works, which is pretty cool. Be a grown up though - report potential issues in a measured way with as much detail as you can gather - it’s always embarrassing when you have to climb down from a high horse when you realise that you made the mistake, not the tool!

Scratch Orgs

Scratch orgs are probably the feature I’ve been most excited about in SalesforceDX. I run the BrightMEDIA team at BrightGen and setting up a developer org for a new member of our team takes around half a day. After the initial setup, every release needs to be executed on each dev org as well as the target customer or demo org(s), which consumes a fair amount of time with a weekly release cadence. There’s also the problem of experimentation - often devs will try something out, realise it’s not the best way to do it, but not tear down everything they built. Over time the dev org picks up baggage which the dev has to be careful doesn’t make its way into version control.

Scratch orgs mitigate the first problem and solve the second. A scratch org is ephemeral - it is created quickly from configuration and should only last for the duration of the development task you are carrying out. When we setup a developer edition we have to contact Salesforce support to get the apex character limit increased and multi-currency enabled. Scratch orgs already have an increased character limit and features can be defined in the configuration. Here’s the scratch org configuration file for one of my projects:

{
  "Company": "KAB DEV",
  "Country": "GB",
  "LastName": "kbowden",
  "Email": "",
  "Edition": "Developer",
  "Features": "Communities;MultiCurrency",
  "OrgPreferences" : {
    "ChatterEnabled": true,
    "S1DesktopEnabled" : true,
    "NetworksEnabled": true,
    "Translation" : true,
    "PathAssistantsEnabled" : true
  }
}
The Features attribute:

"Features": "Communities;MultiCurrency"

enables communities and multi-currency when my org is created, saving me a couple of hours raising a case and waiting for a response right off the bat.

Creating a Scratch Org

Is a single command utilising the new CLI:

> sfdx force:org:create --definitionfile config/workspace-scratch-def.json

 and it’s fast. I’ve just created an org for the purposes of this blog and I’d be surprised if it took more than a minute, although the DNS propagation of the new org name can take a few more minutes. You don’t have to worry about passwords with scratch orgs, it’s all handled by the CLI. To “login” I just execute:

> sfdx force:org:open

and a browser window opens and I’m good to go. Accessing the Manage Currencies setup node shows that multi-currency has indeed been enabled.

Screen Shot 2017 03 04 at 15 46 04

There’s a bit more to it than this in our case - a few packages have to be installed for example - but so far it looks like I can script all of this, which means a new developer just runs a single command to get an org they can start work in. Note that there’s just the standard developer edition data in here - I haven’t found time to play with the data export/import side of the CLI yet so that will have to wait for another day.

Managing Code

If you are familiar with the git paradigm of pulling and pushing changes from/to a remote location, the SalesforceDX source management is simple to pick up. You don’t get version control, but you do get automatic detection of what has changed and where. The docs state that this functionality is only available for scratch orgs and we still have to use the metadata API to push to sandbox/production orgs, which seems fair enough for a pilot to me.

Detecting Differences

In my scratch org I create a simple lightning component in the developer console:

<aura:component >
	<h1>I'm a simple Lightning Component</h1>
</aura:component>

In my current development process I have a script to extract the Lightning metadata and copy it into my source directory. With scratch orgs it’s a fair bit easier.

I can figure out what has changed by running the status subcommand:

> sfdx force:source:status

State       Full Name  Type                  Workspace Path
──────────  ─────────  ────────────────────  ──────────────
Remote Add  Simple     AuraDefinitionBundle

Pulling Code from the Scratch Org

I execute the pull subcommand to extract the new code from the org to my workspace on the local filesystem:

> sfdx force:source:pull

State Full Name Type Workspace Path
─────── ───────── ──────────────────── ───────────────────────────────────────────────────────────
Changed Simple AuraDefinitionBundle /Users/kbowden/SFDX/Blog/force-app/main/default/aura/Simple

I can then list the contents of my workspace and there is my new component:

> ls force-app/main/default/aura/
Simple

Pushing Code to the Scratch Org

If I edit the component locally, the status subcommand picks that up too:

> sfdx force:source:status

State Full Name Type Workspace Path
───────────── ───────────────── ──────────────────── ──────────────────────────────────────────────────────
Local Changed Simple/Simple.cmp AuraDefinitionBundle force-app/main/default/aura/Simple/Simple.cmp-meta.xml
Local Changed Simple/Simple.cmp AuraDefinitionBundle force-app/main/default/aura/Simple/Simple.cmp

and I can publish these changes to the scratch org via the push subcommand:

> sfdx force:source:push

State Full Name Type Workspace Path
─────── ───────── ──────────────────── ──────────────────────────────────
Changed Simple AuraDefinitionBundle force-app/main/default/aura/Simple

Screen Shot 2017 03 05 at 07 49 03

Scratch Orgs are Temporary

Unlike developer orgs, scratch orgs are not intended to persist. In fact I’ve seen docs that state they may be deleted at any point in time. Although I’d imagine in reality it will be based on lack of use, it doesn’t matter: if your scratch org disappears, you can just spin up a new one with the same setup, push your local code and you are back where you were. This does mean you need to treat your local filesystem as the source of the truth, but that’s pretty much how I work anyway.

This way scratch orgs don’t accumulate any baggage, and you don’t have to worry about destroying anything. If you don’t put it into version control, it won’t be there in the future.

One Org Shape to Rule them All

The configuration, data and code that make up your scratch org can be considered a template, especially if the setup is all scripted. This means that my team and I just need to update a single org shape “template” with changes that need to be applied to every development environment. Then we just spin up new scratch orgs and we can be sure that we are all in step with each other, which will save us time on many levels.



Sunday, 19 February 2017

Lightning Design System in Visualforce Part 2 - Forms


(Update 20/02/2017 - added the sample to the github repo - see the Any Code? section) 



In Part 1 of this series I covered getting started with the Lightning Design System for Visualforce developers. The example in that post was a page with a thin veneer of Visualforce, but with content that was pretty much vanilla HTML. In this post I’ll be making much more use of standard Visualforce components, which means I have to make some compromises. What I’m looking for here is to marry the speed of Visualforce development (provided by the standard component library) with the modern styling of the Lightning Design System (LDS) rather than a pixel for pixel match with the Lightning Experience. Done is better than perfect!

Spring 17 and the LDS


In the original post the LDS was uploaded as a static resource, but Spring 17 means that this is no longer necessary - as long as you can live with the consequences.

A new Visualforce tag is available - <apex:slds>. This brings in the latest version of the LDS, hence my reference to consequences. If you can accept always being upgraded to the latest version as soon as it is available, this is the tag for you. If you need to fix the version (which I think I would, so that customer users don’t suddenly get presented with an unexpected change) then stick with the static resource. There are a few rules around using this tag, which are explained in the official docs.
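For reference, fixing the version works much like it did before Spring 17 - upload the desired LDS release as a static resource and pull in its stylesheet. A sketch, assuming a static resource named SLDS203 containing the 2.0.3 release (your resource name and the exact stylesheet path inside the download may differ):

```html
<apex:stylesheet value="{!URLFOR($Resource.SLDS203,
    'assets/styles/salesforce-lightning-design-system.min.css')}"/>
```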

<apex:page showHeader="false" sidebar="false" standardStylesheets="true"
           standardController="Contact" applyHtmlTag="false">
    <html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
        <apex:slds />
        <body class="slds-scope">

As I’m including the SLDS via the HTML header, I have to specify the slds-scope class for the body tag in order to be able to use the SLDS tags. Interestingly the docs state that if I’m showing the header, sidebar or using the standard stylesheets then I can’t add attributes to the html tag and thus SVG icons aren’t supported. However, I am using the standard stylesheets and they are still working for me, at least in Firefox, so go figure. If this doesn’t work for you, you’ll need to switch the icons to another format.


If you don’t upload the LDS as a static resource, you’ll need to get the assets (icons etc) from the system default. Enter the $Asset global variable, another new feature in Spring 17. Simply use $Asset.SLDS in place of your static resource, and you can access assets via the URLFOR function. Again more details in the official docs.

<div class="slds-media__figure">
    <svg aria-hidden="true" class="slds-icon slds-icon-standard-contact">
        <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#contact')}"></use>
    </svg>
</div>
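And if SVG really doesn’t work in your context, the LDS assets also include PNG versions of the icons, which a plain img tag can serve up via the same $Asset global. A sketch - the contact_60.png path follows the usual layout of the LDS assets directory, so verify it against your version:

```html
<img src="{!URLFOR($Asset.SLDS, '/assets/icons/standard/contact_60.png')}"
     class="slds-icon" alt="Contact"/>
```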

Styling Inputs

The key to applying the LDS to standard Visualforce form components is the styleClass attribute - this allows a custom style to override the standard Visualforce styling that we all know and love (!). 

Using a Visualforce standard component inside an SLDS styled form element doesn’t look too bad - just a little truncated. The following markup:

<apex:inputField value="{!Contact.FirstName}"/>


which renders as:

[Screenshot: the input renders slightly truncated with the default Visualforce styling]

Supplying the SLDS style class fixes this:

<apex:inputField styleClass="slds-input" value="{!Contact.FirstName}"/>


[Screenshot: the input rendered full width with SLDS styling]


Buttons are another simple fix - I can still use command buttons, just styled for the LDS:

<div class="slds-p-horizontal--small slds-m-top--medium slds-size--1-of-1 slds-align--absolute-center">
    <apex:commandButton styleClass="slds-button slds-button--neutral" value="Cancel" action="{!cancel}" />
    <apex:commandButton styleClass="slds-button slds-button--brand" value="Save" action="{!save}" />
</div>

[Screenshot: SLDS-styled Cancel and Save buttons]

One size does not fit all

While the style class works well for simple inputs, fields which require more complex widgets are where the compromises come in. Lookups, for example, are very different in the LDS and Visualforce. In this case I have to live with the fact that the search will produce a popup window and the input will have a magnifying glass, but I add some styling to make it less jarring on the user:

<apex:inputField style="width:97%; line-height:1.875em;" value="{!Contact.AccountId}" />

which renders as:

[Screenshot: the lookup input, styled to fit in but retaining the magnifying glass]

So not perfect but not terrible either.

Required Fields

Required fields mean a bigger compromise, as I have to add the required styling myself. My page markup therefore knows which fields are mandatory and which aren’t, which in turn makes the page less flexible - if an administrator makes another field required, the page has to be updated to reflect this, and that needs basic Visualforce skills:

<div class="slds-form-element slds-hint-parent">
    <span class="slds-form-element__label"><abbr class="slds-required" title="required">*</abbr>Last Name</span>
    <div class="slds-form-element__control">
        <apex:inputField styleClass="slds-input" value="{!Contact.LastName}"/>
    </div>
</div>

[Screenshot: the Last Name field with the required asterisk]

The end result

So here’s the final page - clearly not an exact match for LEX, but pretty close, and put together very quickly.

[Screenshot: the completed form page]

Any code?

As usual with LDS posts, the code is in my LDS Samples GitHub repository. There’s also an unmanaged package available to save wasting time copying and pasting - see the README.

In Conclusion

Pragmatism is key here - there are some compromises around styling and losing some of the separation of the page and business logic, but I feel these are outweighed by the sheer speed of development. Of course I could switch to using vanilla HTML with LDS styling and manage the inputs via JavaScript, but if I’m going that route I’ll go the whole hog and use Lightning Components.

Related Posts