Introduction

The purpose of this document is to think through the ways in which an author would test their rule. This started out as a question of interface design. However, I quickly realized that answering it required understanding the relationship between the structured natural language sentence, the is.xalgo message, and the UI methods used for testing.

The kind of testing I am investigating here has to do with table logic. I want a user to be able to check:

  1. if there are permutations missing from their truth table.
  2. if all the fields in a column are filled out in a desired manner.
  3. that given a scenario, the intended ought.xa message is returned.
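The first of these checks can be made mechanical. Below is a minimal sketch in Python, assuming each truth-table column assigns T, F, or B (blank, i.e. don't-care) to every input condition; the column data is taken from the example rule later in this document, but the function names are my own.

```python
from itertools import product

def expand(column):
    """Expand a column's T/F/B assignments into the set of concrete
    T/F permutations it covers (B acts as a wildcard)."""
    choices = [("T", "F") if cell == "B" else (cell,) for cell in column]
    return set(product(*choices))

def missing_permutations(columns, n_conditions):
    """Return every T/F permutation of the input conditions that no
    column in the truth table accounts for."""
    covered = set()
    for col in columns:
        covered |= expand(col)
    return sorted(set(product("TF", repeat=n_conditions)) - covered)

# Input-condition columns A, B, and C from the example rule,
# read top to bottom: capacity, type, value.
columns = [
    ("F", "B", "B"),  # A
    ("T", "B", "F"),  # B
    ("T", "T", "T"),  # C
]
print(missing_permutations(columns, 3))  # → [('T', 'F', 'T')]
```

Run against the example rule, this reports one uncovered scenario: capacity true, type false, value true, which is exactly the kind of gap the user should be prompted to either cover or explicitly ignore.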

I am not interested in testing:

  1. the structure of the JSON.
  2. whether the rule will be surfaced by a specific query.

Scenario

Input Conditions                                             A  B  C
The used capacity of the box is >= 0.5                       F  T  T
The measured type of the box == standard                     B  B  T
The contained value of the box is >= $100.00                 B  F  T

Output Assertions                                            A  B  C
The described status of the delivery service is == offered   F  T  T
The advertised price of the delivery service is > $0.00      F  T  F

A user is writing a rule for a store policy on discounted shipping. They have completed the structured natural language sentences and the truth table. Now they want to check over the rule to make sure that everything is working. What options do they have?

UI Options

Context:    A user has authored a rule and must craft a sample is.xalgo message to send to rule reserve.
User need:  A user wants to test the accuracy of their rule.
UI options:
  1. an unpopulated text editor
  2. a generated blob of JSON containing the field names required to generate a valid is.xalgo message
  3. a simple form generated from the above JSON
  4. a method to select the column the user wishes to test

Unpopulated Text Editor

The user could craft a sample is.xalgo message by hand in a text editor. However, this would require them to know the correct structure, would be time-consuming, and would present a barrier to those unfamiliar with the notation.

Generated JSON

This would do a better job of testing the logic the user is interested in assessing. However, two questions arise: what does that JSON look like, and how is it generated for testing purposes?

Using column "A" from the example rule to generate an is.xalgo message, the JSON would contain something that looks like this:

"box": [
    {"capacity": "0.2"},
    {"type": "standard"},
    {"value": "$100.00"}
 ]

Great. Now we have the piece of JSON that is needed to accomplish the kind of testing the user is interested in. Knowing this structure, it is possible to generalize a method for generating a JSON structure that can be used for testing valid rule permutations.

The question now becomes: how is this piece of JSON derived from the user-authored rule? Let's start by comparing the structured natural language sentence to the JSON, using color to distinguish parts of speech as Joseph Potvin has done in the introductory Oughtomation paper.

sentence: The measured type of the box == standard

JSON:

"box": [{"type": "standard"}]

This means that the structured natural language sentences authored by the user can be used to derive JSON for testing in the following way:

Field within structured natural language sentence   Role in JSON data structure
subjectNoun                                         object
objectNounOrVerb                                    field
objectDescriptor                                    string
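If the mapping above holds, deriving the test JSON is mechanical. The sketch below assumes each sentence has already been parsed into its subjectNoun, objectNounOrVerb, and objectDescriptor fields (the parsing itself is out of scope here); the function name and sample data are my own, based on the example rule.

```python
import json

def derive_test_json(sentences):
    """Group parsed sentences by subjectNoun and map each
    objectNounOrVerb to an empty string for the user to fill in."""
    structure = {}
    for s in sentences:
        structure.setdefault(s["subjectNoun"], []).append(
            {s["objectNounOrVerb"]: ""}
        )
    return structure

sentences = [
    {"subjectNoun": "box", "objectNounOrVerb": "capacity", "objectDescriptor": "0.5"},
    {"subjectNoun": "box", "objectNounOrVerb": "type", "objectDescriptor": "standard"},
    {"subjectNoun": "box", "objectNounOrVerb": "value", "objectDescriptor": "$100.00"},
]
print(json.dumps(derive_test_json(sentences), indent=2))
```

For the example rule this produces the "box" skeleton with empty capacity, type, and value fields, which is the structure shown below.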

I would like some feedback on this. Are my assumptions correct? Let's open this for discussion.

The user would be able to accomplish the desired testing if presented with the following in a text editor, filling in the fields to create a specific scenario:

"box": [
    {"capacity": ""},
    {"type": ""},
    {"value": ""}
 ]

If this thinking is correct, rule testing can easily be undertaken by a user using an interface that derives a JSON data structure from the rule authoring work already done.

Form Derived from JSON

If there is a reliable way to generate a valid JSON structure, then there is no reason a form could not also be used for testing. This may be beneficial for users with little technical knowledge. There may also be advantages for application performance: it is probably less computationally demanding to render a small form than to display and edit a potentially large is.xalgo message.

From the above JSON it is a small step to arrive at an HTML form. The only missing piece of data is a human-readable label for each field. This could be as simple as using the field name as the label. However, as with the UI used for authoring rules, bare field names can be confusing. A possible solution is to restructure the natural language sentence as a question. In the English language example this is easily accomplished.

sentence: The measured type of the box == standard

label formulated as question: The measured type of the box is?

I'm not sure if this is feasible in other languages, but it is worth investigating alongside the ongoing inquiry into the universality of the structured natural language sentence.
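For English, at least, the label transformation can be sketched as a naive string operation: drop the comparator and everything after it, then append "is?". This assumes the comparator (==, >=, etc.) reliably splits the sentence, which is an English-only heuristic; the function name is my own.

```python
import re

# Capturing group so re.split returns [before, comparator, after].
COMPARATORS = r"\s*(==|>=|<=|>|<|!=)\s*"

def label_as_question(sentence):
    """Drop the comparator and the descriptor, then append 'is?'.
    English-only heuristic; other languages need their own rules."""
    head = re.split(COMPARATORS, sentence, maxsplit=1)[0].strip()
    # Avoid doubling 'is' when the sentence already ends with it.
    if head.endswith(" is"):
        head = head[: -len(" is")]
    return head + " is?"

print(label_as_question("The measured type of the box == standard"))
# → The measured type of the box is?
print(label_as_question("The used capacity of the box is >= 0.5"))
# → The used capacity of the box is?
```

Note that sentences with and without a trailing "is" before the comparator both resolve to the same question form, which keeps the generated form labels consistent.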

Column Select

Every valid test should correspond to a column in the truth table. In theory, testing could be accomplished simply by running columns. However, allowing only this would make it difficult to spot unaccounted-for scenarios. For this reason, it is important to have a method of authoring original is.xalgo messages.

That said, there should probably be a way to select a column and use it to populate an is.xalgo test message. Similarly, it would be useful to highlight on the table editor which column corresponds to the test is.xalgo message.
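Populating a test message from a selected column could work roughly as follows. This is a rough sketch under my own assumptions: a T cell copies the sentence's objectDescriptor into the field (a boundary value, which satisfies ==, >=, and <=), while F and B cells are left blank for the user to supply a failing value or ignore; real comparators like > would need smarter value synthesis.

```python
def populate_from_column(sentences, column):
    """Pre-fill a test message from a truth-table column.
    'T' copies the sentence's objectDescriptor; 'F' and 'B' are
    left blank for the user to fill in or leave as don't-care."""
    message = {}
    for s, cell in zip(sentences, column):
        value = s["objectDescriptor"] if cell == "T" else ""
        message.setdefault(s["subjectNoun"], []).append(
            {s["objectNounOrVerb"]: value}
        )
    return message

sentences = [
    {"subjectNoun": "box", "objectNounOrVerb": "capacity", "objectDescriptor": "0.5"},
    {"subjectNoun": "box", "objectNounOrVerb": "type", "objectDescriptor": "standard"},
    {"subjectNoun": "box", "objectNounOrVerb": "value", "objectDescriptor": "$100.00"},
]
# Column C from the example rule: all input conditions true.
print(populate_from_column(sentences, ("T", "T", "T")))
```

Highlighting the corresponding column in the table editor would then be the inverse lookup: match the edited message's field values against each column's expanded permutations.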

Conclusion

If asked to implement testing today, I would be inclined to include elements from the last three UI options. Being able to test using a form allows users to quickly ensure they have authored a rule as intended. However, I also think there is value in being able to see the JSON structure of an is.xalgo message, even if that is not what is being tested. For this reason, I would likely have two tabs allowing the is.xalgo message to be edited using both methods. Finally, I would incorporate the functionality described in Column Select.