Powering Salesforce Lightning Connect with MongoDB


One of the more powerful new features in Salesforce (GA Winter ‘15) is the ability to create external objects from external data sources using Lightning Connect. For the most part these external objects act the same way as custom objects (though there are some limitations I will note at the end). Lightning Connect helps solve several issues commonly faced by organizations that want to present data in Salesforce but do not necessarily want to store it there, whether to reduce integration complexity or because of concerns over security or performance. To get started, all you need is a data store exposed via an OData 2.0 endpoint, and in this post I will show you how to set one up that uses MongoDB as a backend.


I won’t get too deep into OData here, but in short it is an open protocol for interfacing with REST APIs. You can read more about it at the official OData site.

This tutorial will show you how to create your own OData endpoint on top of a MongoDB database. My database contains data about colleges and universities, such as enrollment numbers and tuition costs.

I will be using a package from the JayData project called ‘odata-server’. Given a schema, it creates an OData endpoint that can be consumed by any OData client; in our case that client is Lightning Connect, which will let us expose this data within Salesforce.


For this tutorial I am assuming you have a few prerequisites installed (Node.js, Git, and the Heroku toolbelt), as well as a MongoDB database available (mine is hosted at mongolab.com).

The first thing we are going to do is create a new Node.js project:

mkdir lightning-college && cd lightning-college
git init
npm init # follow the prompts
git add .
git commit -m "initial commit"

This will create our project and initialize our git repository. Next we will add the ‘odata-server’ package we will be using.

npm install odata-server --save
git add package.json
git commit -m "added odata-server"

Now we have everything we need to configure and run our server.

The configuration has two parts: the server configuration and the schema configuration. The server configuration connects to our MongoDB instance, and the typed schema configuration defines the data types contained in the document values.

Next let’s create our server.js file:

require('odata-server');

var config = {
    database: 'college-data',
    provider: {
        server: 'ds033607.mongolab.com:33607',
        databaseName: 'college-data',
        username: 'test_user',
        password: 'password'
    }
};

$data.createODataServer(config, '/colleges.svc', process.env.PORT || 5000);

Now that we have our server configured, we need to define the schema. Each institution in our college-data database is a document. Here is an example of a document from my institutions collection in the college-data database:
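Here is what such a document looks like. The values below are illustrative stand-ins rather than real data (and only a subset of the fields is shown), but the field names match the schema we are about to define:

```javascript
// An illustrative institutions document (hypothetical values, real field names).
var exampleInstitution = {
    UnitID: 100001,                    // unique identifier, used as the key below
    Institution_Name: 'Example State University',
    Year: 2013,
    State: 'MA',
    Percent_Admitted: 62,
    Admissions_Yield: 34,
    Tuition_2013: 11000,
    Tuition_2014: 11500,
    Total_Enrollment: 21000,
    Undergraduate_Enrollment: 16500,
    Graduate_Enrollment: 4500,
    Graduation_Rate: 58,
    SAT_Math_75th_percentile: 640,
    ACT_Composite: 27
};
```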


OData requires typed entities, so for each of these fields we define a type in institution.js:

exports = $data.Entity.extend("colleges.institution", {
    UnitID: { key: true, type: "int" },
    Institution_Name: { type: "string" },
    Year: { type: "int" },
    Percent_Admitted: { type: "int" },
    Admissions_Yield: { type: "int" },
    Tuition_2013: { type: "int" },
    Tuition_2014: { type: "int" },
    Cost_In_State_2014: { type: "int" },
    Cost_Out_Of_State_2014: { type: "int" },
    State: { type: "string" },
    Total_Enrollment: { type: "int" },
    Undergraduate_Enrollment: { type: "int" },
    Graduate_Enrollment: { type: "int" },
    Graduation_Rate: { type: "int" },
    Graudation_Rate_men: { type: "int" },
    Graduation_Rate_women: { type: "int" },
    SAT_Reading_75th_percentile: { type: "int" },
    SAT_Math_75th_percentile: { type: "int" },
    SAT_Writing_75th_percentile: { type: "int" },
    ACT_Composite: { type: "int" }
});

One thing to note is that the UnitID field has key set to true. At least one key is required for an OData endpoint to work with Lightning Connect.

Because our only entity is ‘institution’ we can now define our whole data model. Institutions will be an entity set (the collection of entities) under our main entry point, the entity context.

require('./institution');

$data.EntityContext.extend("colleges", {
    "institutions": { type: $data.EntitySet, elementType: colleges.institution }
});

module.exports = exports = colleges;

Once our model is defined we need to update the server configuration to point to our new context:

require('./model');

var config = {
    type: colleges,
    database: 'college-data',
    provider: {
        server: 'ds033607.mongolab.com:33607',
        databaseName: 'college-data',
        username: 'test_user',
        password: 'password'
    }
};

$data.createODataServer(config, '/colleges.svc', process.env.PORT || 5000);

We are all ready to test at this point. Run the server with the command

node server.js

and open http://localhost:5000/colleges.svc/$metadata in your browser. You should see an XML representation of your schema.
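With the server running, you can also exercise standard OData 2.0 system query options ($filter, $orderby, $top, and so on) against the institutions entity set. Here is a small sketch of how those query URLs are built; the base URL assumes the local server started above:

```javascript
// Build OData 2.0 query URLs for the institutions entity set.
// The base URL assumes the server started above on port 5000.
var base = 'http://localhost:5000/colleges.svc/institutions';

function odataUrl(base, options) {
    // Each OData system query option is prefixed with '$' (e.g. $filter, $top).
    var parts = [];
    for (var key in options) {
        parts.push('$' + key + '=' + encodeURIComponent(options[key]));
    }
    return parts.length ? base + '?' + parts.join('&') : base;
}

// The five largest institutions in Massachusetts:
console.log(odataUrl(base, {
    filter: "State eq 'MA'",
    orderby: 'Total_Enrollment desc',
    top: 5
}));
```

Opening the printed URL in a browser (while the server is running) returns the matching entities as an OData feed.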


All we need now is a Procfile and we are ready to deploy our server to Heroku:
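A minimal Procfile just tells Heroku to run our server as a web process (the standard one-liner for a Node app, assuming the server.js above):

```
web: node server.js
```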


Add all of these new and changed files to your git repository:

git add package.json Procfile institution.js model.js server.js
git commit -m "OData server for colleges db"

And finally, create and deploy your Heroku app:

heroku create
git push heroku master
heroku open

The last thing you have to do is the easiest part: adding your endpoint as an External Data Source in Salesforce. Log in to your Salesforce org and go to Setup -> Develop -> External Data Sources.

Create a new Lightning Connect: OData 2.0 data source. I named mine ‘Colleges’, selected ‘Include in Salesforce Search’, and left the other options at their defaults. The URL will be your OData server endpoint on Heroku.

Save this and then select “Validate and Sync”.

Select the ‘institutions’ table and sync. This will create the institutions object in your org that you may now use mostly like a custom object.

List Views:


Detail Pages:


Lookup Relationships using Indirect Lookup:


Considerations and Limitations

While Lightning Connect is an exciting new tool in Salesforce, it is early in its development and there are some limitations to consider. They are all documented in the Salesforce help, but I will point out a few of note. Although external objects act very similarly to regular custom objects, you cannot use triggers, formulas, or validation rules with them. There is also no reporting functionality and no record-level sharing with external objects (so no sharing rules). For high data volumes (more than 50,000 records), Salesforce1 integration and search will not work.


Lightning Connect is a powerful tool that makes the once immensely complex task of integrating a backend data store with Salesforce relatively simple. It works with any endpoint that supports the OData 2.0 protocol, and since that is an open standard it is not difficult to find a connector or service. This provides a custom-object-like experience to the end user without having to build and maintain an integration. This is just the beginning, and I for one can’t wait to see what else is coming down the Lightning Connect pipe (read/write, anyone?).

You can see the source for this project here.
