{"id":98,"date":"2019-07-08T14:46:17","date_gmt":"2019-07-08T14:46:17","guid":{"rendered":"https:\/\/www.bddtesting.com\/?page_id=98"},"modified":"2019-07-08T14:46:17","modified_gmt":"2019-07-08T14:46:17","slug":"story-based-bdd-an-alternative-approach-testing-with-ember","status":"publish","type":"page","link":"https:\/\/www.bddtesting.com\/story-based-bdd-an-alternative-approach-testing-with-ember\/","title":{"rendered":"Story-Based BDD – An alternative approach testing with Ember"},"content":{"rendered":"\n
In this post we want to introduce you to a way to test your Ember application: story-based BDD. BDD stands for Behaviour Driven Development. It combines the ideas of TDD (Test Driven Development) with DDD (Domain Driven Design).

At its core is a ubiquitous, domain-specific language (DSL) using natural language constructs, which serves to improve understanding and collaboration between non-technical people with the domain knowledge (business analysts, product owners etc.) and developers. This language is used to describe the intended behavior of the software as a series of examples ("specification by example").

To give you a grasp of how that could look, here is a real-world example from one of our apps: a specification of how the cookie disclaimer we all learned to love should behave:
```gherkin
  Scenario: Show a cookie disclaimer
    When I go to the authentication page
    Then I should see a cookie disclaimer
    When I click the accept cookies button
    Then I should not see a cookie disclaimer

  Scenario: Hide accepted disclaimer
    Given I have accepted the cookie disclaimer
    When I go to the authentication page
    Then I should not see a cookie disclaimer
```

At the same time, by being a semi-formal, parseable language, often formalized using the Gherkin syntax as shown above, it serves as the foundation for implementing your (acceptance) tests: it provides the *test specification*, which describes *what* to test. Obviously that won't be executable by itself, so the developer still has to provide the *test implementation*, which defines *how* to test it.
## OK, NICE, BUT NOW SHOW ME SOME CODE

If that sounded a bit too broad to you, and you wonder how it actually translates into executable tests, I can understand you. So let's dive into some code. Of course we don't want to start from scratch when implementing our tests based on this paradigm. Luckily there are a number of solid test frameworks for story-based BDD: there is Cucumber for various languages, including Ruby, Java and JavaScript, Behave for Python, or Behat for PHP, to name just a few. And guess what, there is also an Ember addon: `ember-cli-yadda`, which builds on top of the JavaScript BDD library Yadda, a JavaScript implementation similar to Cucumber. Its responsibility is to integrate the parsing of Gherkin-based feature files, their execution based on the accompanying step files, and the integration of all that into the command you are all too familiar with: `ember test`.
Important to note here is that it will *not* replace your whole testing stack with something completely new. To the contrary, it runs on top of either QUnit or Mocha (whatever you already use), and for the test implementation you continue to have all the testing tools from Ember and its ecosystem at hand: `async`/`await` with the beautiful test helpers from `@ember/test-helpers`, `ember-test-selectors`, `ember-cli-page-object`, `ember-cli-mirage`, all covered…

Also, you can write some tests the BDD way, and others using the traditional approach with QUnit or Mocha directly. In fact, as BDD's natural language capability is quite suitable for describing user stories, I think it is quite common to write acceptance tests that way, and to continue using the more low-level test frameworks for the lower-level test types, like unit or component integration tests. At least that's what we commonly do, so I will focus on acceptance tests for the remainder of this blog post. But technically there is nothing that would prevent you from using this addon for any type of test.

But now, let's finally get our hands dirty! As you can imagine, it all starts with this command:
```bash
ember install ember-cli-yadda
```

This will automatically add a few files to our project, some of which I will come back to a bit later.
## MY FIRST FEATURE

For the purpose of this tutorial, we will implement some basic tests that cover the error handling of our app. So let's create our first BDD feature:

```bash
ember g feature error
```
This will generate the feature file `tests/acceptance/error.feature`, which holds our natural language test specification, along with its implementation in `tests/acceptance/steps/error-steps.js`.

Let's replace the dummy feature file with something more meaningful. For testing that our handling of page-not-found errors works correctly, we could specify it as follows:
```gherkin
@setupApplicationTest
Feature: Error page

  Scenario: Error shown when visiting /foo

    Given I am an unauthenticated user
    When I visit /foo
    Then I should see the error message "Page not found."
```
So let's quickly go through this line by line:

* `@setupApplicationTest` is a Yadda annotation that already has a default implementation in `tests/helpers/yadda-annotations.js`. It calls the function of the same name, which you probably already know from your traditional tests, to set up all tests in this file as acceptance tests. There are a few other annotations pre-defined, like `@setupRenderingTest`, `@setupTest` and `@ignore`, but you are free to extend the existing implementations or introduce new annotations.
* Every step of a scenario has to start with one of the keywords `Given`, `When` or `Then` (or `And` and `But`, which just repeat the keyword of the previous line).
You might wonder what happens when we run our tests now. Obviously this cannot really work, as our computer, dumb as it is, cannot know what all this actually means. And as such, when running your tests (either by `ember t -s`, or by `ember s` and opening `http://localhost:4200/tests`), you will see an `Undefined step` error for all three of the above steps.

Obviously the framework is not capable of natural language processing, and has no machine learning algorithms implemented (although that would probably boost its marketing success considerably), so unfortunately we still have to implement all of this ourselves. The framework "just" provides a mechanism to map these natural language constructs to actual, concrete implementations.
So let's implement our steps now. The first step is actually just a no-op. In a real app, you would probably also have to implement an `I am an authenticated user` step, using the authentication layer of your app, e.g. `ember-simple-auth`. But for the sake of simplicity, we will ignore this in our tutorial. So our `tests/acceptance/steps/error-steps.js` could look like this now:
```js
import steps from './steps';
import { expect } from 'chai';

export default function() {
  return steps()
    .given('I am an unauthenticated user', function() {
      // nothing to do here...
    })
  ;
}
```
We have added a new `.given()` definition to our steps dictionary. When you look at your test runner now, you will see that the first step no longer shows the `Undefined step` error, as we now have a matching step implementation for it.
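As an aside, if you do need the authenticated counterpart mentioned above, a step built on `ember-simple-auth` could roughly look like the sketch below. It is not part of this tutorial's code; it assumes the addon's `authenticateSession` test helper is enough to stub a session for your app:

```js
import steps from './steps';
// `authenticateSession` is ember-simple-auth's test helper for stubbing an authenticated session
import { authenticateSession } from 'ember-simple-auth/test-support';

export default function() {
  return steps()
    .given('I am an authenticated user', async function() {
      // establish an authenticated session before the scenario continues
      await authenticateSession();
    })
  ;
}
```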
Note that I have been using Mocha here, along with Chai for its more expressive set of assertions, which also allows you to write regular tests in a more readable BDD style. But this is not a requirement at all, you can use QUnit just as well!

Let's add the second step now:
```js
import steps from './steps';
import { expect } from 'chai';
import { visit } from '@ember/test-helpers';

export default function() {
  return steps()
    .given('I am an unauthenticated user', function() {
      // nothing to do here...
    })
    .when('I visit /foo', async function() {
      await visit('/foo');
    })
  ;
}
```
## MAKE THINGS REUSABLE

Technically these two steps are fine, but they fail at one very important point: they are not really reusable. And this is one of the benefits of BDD I especially appreciate: if done right, it not only encourages, but even enforces reusability of the test implementation. So what's wrong with our steps so far?

The first one, even though it literally does nothing, is not reusable, as it is defined in `error-steps.js`, which is only taken into account for `error.feature`. But it's easy to see that this step might be useful in many other feature definitions as well. Luckily the solution is easy: as you can see, we import another steps file, `steps.js`, which exists exactly for this purpose: to hold all step implementations that are reusable across features. Simply by moving the `.given()` step from `error-steps.js` to `steps.js`, we can reuse that step in any other feature.
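To make this concrete, the shared `steps.js` could end up looking roughly like this after the move. The exact shape of the file generated by ember-cli-yadda may differ; the sketch assumes it exports a function returning a Yadda English library, which the feature-specific step files then chain onto:

```js
// tests/acceptance/steps/steps.js (sketch; your generated file may look slightly different)
import yadda from 'yadda';

export default function() {
  return yadda.localisation.English.library()
    // steps defined here are available to every feature
    .given('I am an unauthenticated user', function() {
      // nothing to do here...
    })
  ;
}
```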
The same can be said for the second step, but it suffers from another drawback: it has the URL to visit hardcoded. In a real app you would have dozens or even hundreds of different URLs you want to visit in tests, and you certainly don't want to duplicate the code for each of them.

But we have you covered here as well, by using so-called parameterized steps: any occurrence of a word starting with `$` is treated as a parameter. In fact you can even use complex regular expressions with matching groups to extract parameters. So our formerly hardcoded URL becomes a variable parameter like this, making this step reusable for any other feature or scenario:
```js
.when('I visit $url', async function(url) {
  await visit(url);
})
```
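To illustrate the regular-expression variant mentioned above: a step signature can also be a regex, and its capture groups are passed as arguments to the step function. The step below is just an illustrative example, not part of the tutorial's step files:

```js
// sketch: a step matched via a regular expression with a capture group
.when(/^I open the (blog|products) section$/, async function(section) {
  await visit(`/${section}`);
})
```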
Now we just have to implement our third step, which in this case is actually specific to the error feature, so we again add it to our `error-steps.js`:
```js
import steps from './steps';
import { expect } from 'chai';
import { find } from '@ember/test-helpers';

export default function() {
  return steps()
    .then('I should see the $type message "$message"', function(messageType, messageText) {
      let selector = `[data-test-${messageType}-message]`;
      expect(find(selector)).to.have.trimmed.text(messageText);
    })
  ;
}
```
This will cover our last step, `I should see the error message "Page not found."`, and again, by using parameters it will be reusable for different messages and also for different message types (besides errors). In case you are wondering about the `trimmed.text()` assertion: this is made available by the `chai-dom` Chai plugin. Again, you can use QUnit just as well; in that case I would recommend `qunit-dom` for your DOM-based assertions. Also note that this example uses a `data-test` attribute selector to find the element that holds the error message, powered by ember-test-selectors.
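If you go the QUnit route instead, the same assertion could roughly be written with `qunit-dom` as sketched below. This assumes your steps function has access to the test's `assert` object; check how your generated steps file wires that up when using ember-cli-yadda with QUnit:

```js
// sketch: a qunit-dom flavoured variant of the message assertion step
.then('I should see the $type message "$message"', function(messageType, messageText) {
  let selector = `[data-test-${messageType}-message]`;
  // hasText() collapses surrounding whitespace, similar to chai-dom's trimmed.text()
  assert.dom(selector).hasText(messageText);
})
```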
At this point, we should have all test steps implemented, so no more `Undefined step` errors, and given that our app is working correctly, the test should pass now! 🎉
## ADD SUPPORT FOR EMBER-CLI-MIRAGE

Going forward with our "Page not found" example, we might want to add tests for cases where the route itself is known, but the dynamic model is not available, i.e. the API called by the route's model hook returns a 404 response.

This leads us to tests based around loading data with `ember-data` and `ember-cli-mirage`, which provide some good examples of where the reusable character of Yadda's steps is again very helpful. For most acceptance tests based around dynamic data you have to define some seed data. Mirage's factories provide a powerful concept for this, and together with the parameterized steps we just learned about, this allows us to create some easy to use, generic and reusable steps. Here is a simple one:
```js
// `singularize` comes from ember-inflector; `server` is the Mirage server available in tests
.given('there (?:are|is) ?([0-9]*) ([a-z-]+)(?: models)? in my database', function(count, model) {
  server.createList(singularize(model), parseInt(count) || 10);
})
```
With this simple step implementation, all of the following `Given` steps would be available in any new feature specification, without requiring any further implementation:
```gherkin
Given there are users in my database
Given there is a blog-post in my database
Given there are 5 products in my database
```
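For these steps to create meaningful records, each model referenced in a scenario needs a corresponding Mirage factory. The file below is not part of the post, just a plausible minimal example for the `product` model used above:

```js
// mirage/factories/product.js (illustrative example)
import { Factory } from 'ember-cli-mirage';

export default Factory.extend({
  // generate a distinct name per record based on the sequence index
  name(i) {
    return `Product ${i}`;
  },
  price: 99,
  vatRate: 19,
});
```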
Given some advanced features like Yadda converters and multiline steps with CSV-based tabular data, even generic steps that support overriding Mirage's factories for certain properties are possible:

```gherkin
Given there are the following products in my database:
---
id | name | price | vat-rate |
1  | Foo  | 99    | 19       |
2  | Bar  | 199   | 7        |
---
```

The implementation of this is skipped for now; an eventual follow-up blog post might go into more detail!
### SETUP MIRAGE

What remains to be done is to actually set up Mirage for all our acceptance tests. Mirage provides the `setupMirage` function for this, which should be called in our test's `beforeEach()` hook. But how do we do this, given that `ember-cli-yadda` handles the test setup and teardown logic (again based on either QUnit or Mocha) on its own? The solution is the already mentioned annotations. We declared our feature to use the `setupApplicationTest()` function (provided by either `ember-qunit` or `ember-mocha`) by using the `@setupApplicationTest` annotation. What this actually does is specified in your project's `tests/helpers/yadda-annotations.js` file. `ember-cli-yadda` automatically installs a default implementation of these annotations, but as this file is under your own control as part of your repo, you can customize it at will. Here we could either introduce a new annotation like `@setupMirage` to call the setup function, or just extend the existing handling of the `@setupApplicationTest` annotation to call `setupMirage()` for *every* application test in our test suite. We choose the latter option: the `setupYaddaTest()` hook gets called for every scenario and returns a function responsible for setting up the scenario's test. So instead of the default implementation, which just delegates to `setupApplicationTest()`…
```js
function setupYaddaTest(annotations) {
  if (annotations.setupapplicationtest) {
    return setupApplicationTest;
  }
  // ...
}
```
… we add the `setupMirage()` call:
```js
import setupMirage from 'ember-cli-mirage/test-support/setup-mirage';
// ...
function setupYaddaTest(annotations) {
  if (annotations.setupapplicationtest) {
    return function() {
      let hooks = setupApplicationTest();
      setupMirage(hooks);
    };
  }
  // ...
}
```
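For completeness, the first option mentioned above, a dedicated `@setupMirage` annotation, could look roughly like this. The lower-cased `annotations.setupmirage` key is an assumption based on how `annotations.setupapplicationtest` is checked in the default implementation:

```js
// sketch of the alternative: only set up Mirage for features tagged with @setupMirage
function setupYaddaTest(annotations) {
  if (annotations.setupapplicationtest) {
    return function() {
      let hooks = setupApplicationTest();
      if (annotations.setupmirage) {
        setupMirage(hooks);
      }
    };
  }
  // ...
}
```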
## SCENARIO OUTLINES

Having laid out the basics of testing with Mirage, let's get back to the previous example of testing our error page. A test specification for a not-found error due to a non-existing model could look like this:
```gherkin
Scenario: Error shown when visiting /blog/123

  Given I am an unauthenticated user
  When I visit /blog/123
  Then I should see the error message "Page not found."
```
As we already have the necessary steps implemented, and we don't have a step that creates the blog-post model (here with ID 123), this should already work and correctly test the error handling of our application. What is annoying, however, is that you would have to duplicate this scenario for every URL you want to test that is supposed to render an error page. In fact, the only difference compared to the first error scenario from the beginning of this blog post is the URL in the visit step (here `/blog/123` instead of `/foo`).

But there is another powerful solution available: somewhat similar to parameterized steps, we can parameterize whole scenarios with variadic data, using so-called example tables:
```gherkin
Scenario: Error shown when visiting [url]

  Given I am an unauthenticated user
  When I visit [url]
  Then I should see the error message "Page not found."

  Where:

  | url       |
  | /foo      |
  | /blog/123 |
```
We just unified both separate but almost identical scenarios into one scenario outline, where the difference is given by the example table following the `Where:` keyword. The first line specifies the parameter's name, while each following line provides the actual data, generating a new scenario variant from it. Any reference to a parameter in brackets (like `[url]` here) will get replaced with the actual data.

This is useful for writing similar tests with differing data, for example to cover different edge cases, without requiring any duplication. In our case we can use this to add coverage for even more aspects of our error handling:
```gherkin
Scenario: Error shown when visiting [url]

  Given I am an [authstatus] user
  When I visit [url]
  Then I should see the error message "[message]"

  Where:

  | authstatus      | url          | message         |
  | unauthenticated | /foo         | Page not found. |
  | unauthenticated | /blog/123    | Page not found. |
  | authenticated   | /private/foo | Page not found. |
  | unauthenticated | /private/foo | Access denied.  |
  | unauthenticated | /private     | Access denied.  |
```
## CONCLUSIONS

To wrap up, here are a few pros and cons of this approach:
* With a classic test-first approach it can be hard to write the test implementation for a feature that does not exist yet: you cannot even write the first `await visit()` call, as the route is not yet known. So whether all the following test code actually works remains unclear. With story-based BDD you can at least write the test *specification* first, which helps you to get a better understanding of what needs to be done, what edge cases to cover etc. The *implementation* of that test can be done later, in sync with the actual implementation or even afterwards.
* On the other hand, you add another layer and another dependency to your testing stack, and you might have to step in and update such an addon yourself (`ember-cli-yadda` in this case) when others are not able to do this in time. At least this was the case for me, when the awesome new Ember testing APIs (RFC 268 in particular) were implemented in the default `ember-qunit`, but were not yet supported in `ember-mocha` and `ember-cli-yadda`. So I had to invest some time and contribute a bunch of PRs (if you are interested: ember-mocha #190, #192, #199, #200; ember-cli-yadda #65, #70, #71). But that was a bigger change that should not happen too often, so I think that risk is absolutely justifiable, but nevertheless: you have been warned! 😉

https://www.kaliber5.de/en/blog/story-based-bdd-an-alternative-approach-to-testing-with-ember/