
Preparing Multi-Brand and Multi-Environment Tests

Learn how to effectively design a chatbot test strategy for multiple brands and environments, using best practices to streamline Botium setup.

When creating a chatbot test strategy from the ground up, you might encounter scenarios like:
  • Managing a chatbot across five brands, with mostly similar conversations but slight variations.
  • Testing across development, test, and production environments.
If any of this sounds familiar, keep reading. This article will guide you through best practices for configuring Botium to meet these needs.

Features and Techniques Overview

For setting up Botium we will use several techniques that are independent of each other, but incredibly powerful in combination.

  • Wildcard Matching: When asserting chatbot answers, wildcards ("jokers") can be used to accept any text. This is not specific to Botium, but it comes in handy when asserting content for different brands - see the TC_HELLO convo below for an example.

  • Scripting Memory / Test Parameter Store: With the Botium Scripting Memory it is possible to inject dynamically generated or static values into your test cases. We will use this concept to set different conversation parameters for each environment your tests should run against.

    For information about the scripting memory, see Using the Scripting Memory.

  • Test Set Dependencies: In Botium it is possible to define dependencies between test sets and combine them into a single test set. We will use this technique to separate the different requirements into individual test sets and combine them as needed.

  • Environment-Specific Test Project Capabilities: In Botium it is possible to define environment-specific capabilities which are merged with the chatbot capabilities. So it is sufficient to define the basic chatbot capabilities only once, and then add environment-specific adaptations at Test Project level (e.g. selecting a different IBM Watson Assistant workspace or a different HTTP endpoint), as sketched below.
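
  As a minimal sketch of this merge - assuming the IBM Watson Assistant connector, whose capabilities include CONTAINERMODE, WATSON_APIKEY and WATSON_WORKSPACE_ID, with placeholder values:

    # Chatbot capabilities (defined once, pointing to the development workspace)
    CONTAINERMODE: watson
    WATSON_APIKEY: <api key>
    WATSON_WORKSPACE_ID: <development workspace id>

    # Test Project capabilities (environment-specific overlay)
    WATSON_WORKSPACE_ID: <production workspace id>

    # Effective capabilities after the merge
    CONTAINERMODE: watson
    WATSON_APIKEY: <api key>
    WATSON_WORKSPACE_ID: <production workspace id>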

Resulting Test Sets

In the end there will be several new objects in Botium:

  • There will be only one chatbot defined

  • There will be one shared test set holding the test cases valid for all brands (with placeholders)

  • For each brand, there will be a brand-specific test set with brand-specific test cases and brand-specific scripting memory

  • For each combination of brand and environment you need to run your tests against, there will be one test project combining:

    • the chatbot, enhanced with environment-specific capabilities

    • the brand-specific test set with the scripting memory files

    • the shared test set
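
For example, with the five brands and the three environments (development, test, production) from the introduction, this adds up to one chatbot, one shared test set, five brand-specific test sets, and 5 × 3 = 15 test projects.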

Step By Step

Now comes the interesting part - follow these steps to set up the basic structure in Botium.

  1. Connect to IBM Watson Assistant: In this example we will use IBM Watson Assistant, but the same principle works for all supported technologies. We connect the chatbot to the Assistant's development workspace, so we can use it for developing the test cases. When running the test cases later, we will connect to the environment-specific Assistant workspaces by overwriting this setting from the Test Project.

  2. Create a Shared Convos Test Set with Wildcards: Create a test set named Shared Convos in Botium and add the first convos in the Visual Convo Designer. The convos should map the conversation structure, and they should be free from any brand-specific content by using wildcards.

    Here we have a convo named TC_HELLO, which sends a default greeting to the chatbot and expects a default greeting back:

    Note: Here is the corresponding BotiumScript (for copy and paste). The * is used as a wildcard - this is the spot where the brand name would appear.
    TC_HELLO
    
    #me
    Hello
    
    #bot
    Hello, this is Heinz, the chatbot of *. How can I help you ?
  3. Create Brand-Specific Test Cases with Scripting Memory (optional): The above test case would assert that:

    • the chatbot introduces itself as Heinz

    • and that some brand name is included (any text matches the wildcard)

    But for your brands, you want to make sure that:

    • Each brand chooses a different name for the chatbot

    • The brand name itself is asserted (instead of accepting just any text via the wildcard)

    Do the following for each of the brands:

    1. Create a new Test Set: For each brand, create a brand-specific test set - for the first brand, name it Params BRAND-1.
    2. Add Brand Parameters: The brand-specific parameters are saved in Scripting Memory files. In the Params BRAND-1 test set, add a YAML file named Scripting Memory:

    3. Define Brand Variables: In this file, we define the variables that we will use in our test cases, like chatbot name and brand name:
      scriptingMemory:
        - header:
            name: heinz
          values:
            $chatbot_name: Heinz
            $brand_name: My first brand
      Note: For another brand, the Test Set Params BRAND-2 looks roughly the same, but with different variable values (the header name just identifies the scripting memory case):
      scriptingMemory:
        - header:
            name: anna
          values:
            $chatbot_name: Anna
            $brand_name: Another brand
    4. Enable the Scripting Memory: Enable the scripting memory for each test set under Botium Tools & Settings > Test Sets > Your Test Set > Configuration > Scripting:
      • Enable the switch Enable Scripting Memory

      • Enable the switch Enable Test Parameter Store
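
      Note: If you later run the same test sets with the Botium CLI or botium-core directly - an assumption, the switches above cover the Botium user interface - the scripting memory corresponds to the SCRIPTING_ENABLE_MEMORY capability:
      SCRIPTING_ENABLE_MEMORY: true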



  4. Adapt Shared Convos with Brand-Specific Content (optional): The shared test cases from above now have to be changed to use the placeholders for the chatbot name and brand name instead of a plain wildcard. Replace the corresponding spots in the test case with the variable names:
    TC_HELLO
    
    #me
    Hello
    
    #bot
    Hello, this is $chatbot_name, the chatbot of $brand_name. How can I help you ?
    Before a test case is run, these variables are filled from the scripting memory files you defined upfront, so the assertions are done against concrete chatbot and brand names.
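    For example, when running with the Params BRAND-1 scripting memory, the bot assertion is effectively evaluated as:
    
    #bot
    Hello, this is Heinz, the chatbot of My first brand. How can I help you ?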
  5. Create Test Projects: Now go to the Test Suite and register a new test project to combine everything from above and apply environment-specific settings:
    1. As the Test Project Name choose something like BRAND-1 DEV

    2. Select the Chatbot

    3. Select the Shared Convos Test Set and the brand-specific Params BRAND-1 Test Set

    4. Save the Test Project.

    5. Now go to the project's configuration under Configuration > Advanced. In the Advanced section, you can overwrite the capabilities of the chatbot with environment-specific settings.
      Tip: You can find the name of the capability (the basic configuration items for the Botium Connectors) either in the connector documentation, or in the Advanced Mode of the Chatbot connector settings.


      In this case, we have to overwrite the IBM Watson Assistant workspace ID to connect it to a different workspace. Repeat the above steps for the other brands and environments, and name the Test Projects accordingly (BRAND-2 TEST, BRAND-1 PROD, ...).
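
      For example, for the BRAND-1 PROD Test Project, the overwritten capability could look like this (WATSON_WORKSPACE_ID is the capability name used by the IBM Watson Assistant connector; the value is a placeholder):
      WATSON_WORKSPACE_ID: <BRAND-1 production workspace id>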

      Everything is ready now for running your brand- and environment-specific test cases.
