A successful general purpose technology is one that is not only flexible, but is also easily modified to fit specific use cases. This document examines how best to accommodate specificity within the authoring process using an Xalgo Rule Schema (XRS) document.
Let me start by defining the problem space.
Context | Need | Idea |
---|---|---|
The XRM authoring panel will be used by people across use cases. | For specific authorship needs (such as requiring instructions in another language, or authoring a rule within a specific industry context), the user will require custom inputs and labels (dropdowns containing specific industry terms, pre-populated hinting, custom labels, improved field descriptions, etc.). | Make customization of the XRM assembly panel simple by deriving input variables from a selected schema document. Have a `schemas` folder in the root of the XRM dev application to store schema files. |
Over time there has been some back and forth on what specific role the schema serves and how much information it contains. As of now the schema exists as a makeshift CMS, storing information about rule fields for the XRM dev application. However, its utility is limited. The file is hidden in the back end, and it is the singular source of truth regarding rule-authoring fields (i.e. changes to this document persist across all authoring panels).
Unfortunately, being limited to this single schema reduces the flexibility of the assembly panel and its general purpose capacity. For example, a pencil manufacturer may require specific terms to be uniform across all the rules they author. They would need a schema, available to the author, that enforces these constraints.
Importantly, an author will likely need to make use of multiple schemas (the data structure of the schema is not changing, only the input preferences) from a single instance of XRM. A `schemas` folder is required in the root of the XRM dev application where the developer can store schema files for use in the IDE.
In the forthcoming Oughtomation paper, Joseph Potvin outlines the specifics of the schema used in the XRM dev interface. All valid extensions of the XRM dev schema will be derived from the fields outlined there. In fact, the schema data structure will remain unaltered, with the developer adding objects that describe only input and label instructions for the assembly panel.
For an example, let's return to the pencil manufacturer. As employees author rules, they begin to realize that they are using the same four UNSPSC fields over and over again. For the authors it is difficult to remember the codes, and it is frustrating to look them up repeatedly. The developer who deployed XRM for the organization decides to add a `pencil-manufacturing-schema.json` document to the `schemas` folder that will provide them with a dropdown containing the four UNSPSC codes used by the organization.
The file contains the following:
```json
// pencil-manufacturing-schema.json
{
  "item.classification": {
    "label": "UNSPSC Code",
    "description": "Choose the product involved in this rule",
    "type": "array",
    "options": [
      {"label": "Pencil or pen grips", "value": "44121707"},
      {"label": "Colored pencils", "value": "44121707"},
      {"label": "Wooden pencils", "value": "44121706"},
      {"label": "Mechanical pencils", "value": "44121705"}
    ]
  }
}
```
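To make the mechanics concrete, here is a minimal sketch of how a custom schema entry like the one above might be applied over the default schema. The merge semantics, the `defaultSchema` contents, and the `applyExtension` helper are all assumptions for illustration, not part of XRM dev.

```javascript
// Sketch of schema extension: a custom schema overrides only the input and
// label instructions of the default schema; the underlying data structure
// is unchanged. Field names mirror the pencil example above.
const defaultSchema = {
  "item.classification": { label: "Classification", type: "string" },
};

const pencilSchema = {
  "item.classification": {
    label: "UNSPSC Code",
    type: "array",
    options: [{ label: "Wooden pencils", value: "44121706" }],
  },
};

// Hypothetical merge: extension entries replace input/label fields
// key-by-key, leaving unextended fields on their defaults.
function applyExtension(base, extension) {
  const merged = {};
  for (const key of Object.keys(base)) {
    merged[key] = { ...base[key], ...(extension[key] || {}) };
  }
  return merged;
}

const active = applyExtension(defaultSchema, pencilSchema);
console.log(active["item.classification"].label); // "UNSPSC Code"
```

Under this sketch, an author who selects no extension simply gets the default schema unchanged.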
To accommodate common uses, the input object will likely require the following arguments:

- `label`: accepts strings and is used to specify the input label.
- `description`: accepts strings and is used to specify the input description.
- `hint`: accepts strings and is used to specify the input hint.
- `type`: specifies the kind of input and accepts the following arguments:
  - `array`: a drop down.
  - `string`: a text string input.
  - `static`: a non-editable field.
  - `date/time`: only needed in two instances, so not sure if it should be included.
  - `range`: only needed for "output purpose", so not sure if it should be included.
  - `custom`: it seems like a good idea to have the option of including a custom input function, but it may have unintended consequences. This could be used to accommodate the range and date/time inputs that are used in rare instances. It could also be used to build better hinting systems, etc.
- `options`: accepts an array of options for dropdown choices.

In order to accommodate the arguments of extended rule schemas, the front end will have certain variable characteristics. Primarily, an input component will be needed that takes arguments from the schema about input types.
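One way such an input component could work is to dispatch on the schema's `type` argument. This is a rough sketch; the widget names (`"dropdown"`, `"text"`, `"readonly"`) are placeholders, not real XRM dev component names.

```javascript
// Hypothetical dispatch from a schema entry's "type" argument to a UI
// widget kind, following the argument list described above.
function widgetFor(entry) {
  switch (entry.type) {
    case "array":
      return "dropdown"; // rendered from entry.options
    case "string":
      return "text";
    case "static":
      return "readonly";
    case "custom":
      // Assumed shape: a custom entry carries its own render function.
      return entry.render ? "custom" : "text";
    default:
      return "text"; // date/time and range fall back until decided
  }
}

console.log(widgetFor({ type: "array", options: [] })); // "dropdown"
console.log(widgetFor({ type: "static" }));             // "readonly"
```

A fallback branch like the one above would let the undecided `date/time` and `range` types degrade gracefully while the question of including them is still open.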
This introduces a key constraint. Before navigating to the rule assembly panel the user will need to select what schema they are using. Alternatively they may opt to use the default schema—the schema currently implemented in XRM dev. I will write up a short document thinking through the UX of this action.
This question of custom interfaces has come up in multiple conversations, both with core contributors and with those interested from a distance. Given the common nature of this problem, and the fact that such a feature will make XRM dev more general purpose, I strongly advocate that this approach be used in the next generation of the software.
From the default XRM dev there should be a way to search for existing published schemas. I'm not sure how this is best accomplished. Perhaps there can be a designated repo where contributors add schemas.
It may make sense to add a field to the XRS indicating the address of the schema used. I'm not sure if this is needed, but may prove useful.
Table editor features are still unexplored. The ability to link to a table for the contents of an array may be useful for complex hinting.
The purpose of this document is to think through the ways in which an author would test their rule. This started out as a question of interface. However, I quickly realized that the problem I was trying to answer required understanding the relationship of the structured natural language sentence to both the is.xalgo message and the UI methods used for testing.
The kind of testing I am investigating here has to do with table logic. I want a user to be able to check:
I am not interested in testing:
Input Conditions | A | B | C |
---|---|---|---|
The used capacity of the box is >= 0.5 | F | T | T |
The measured type of the box == standard | B | B | T |
The contained value of the box is >= $100.00 | B | F | T |
Output Assertions | | | |
The described status of the delivery service is == offered | F | T | T |
The advertised price of the delivery service is > $0.00 | F | T | F |
A user is writing a rule for a store policy on discounted shipping. They have completed the structured natural language sentences and the truth table. Now they want to check over the rule to make sure that everything is working. What options do they have?
Context | User need | UI Options |
---|---|---|
A user has authored a rule and must craft a sample is.xalgo message to send to rule reserve | A user wants to test the accuracy of their rule | An unpopulated text editor |
| | A generated blob of JSON containing the field names required to generate a valid is.xalgo message |
| | A simple form generated from the above JSON |
| | A method to select the column the user wishes to test |
The user could craft a sample is.xalgo message using a text editor. However, this would require them to know the correct structure, would be time consuming, and would present a barrier to those unfamiliar with the notation.
This would do a better job of testing the logic the user is interested in assessing. However, a question arises: what does that JSON look like, and how is it generated for testing purposes?
Using column "A" from the example rule to generate an is.xalgo message, the JSON would contain something that looks like this:
```json
{
  "box": [
    {"capacity": "0.2"},
    {"type": "standard"},
    {"value": "100%"}
  ]
}
```
Great. Now we have the piece of JSON that is needed to accomplish the kind of testing the user is interested in. Knowing this structure, it is possible to generalize a method for generating a JSON structure that can be used for testing valid rule permutations.
The question now becomes, how is this piece of JSON derived from the user authored rule? Let's start by comparing the structured natural language sentence to the JSON using color to distinguish parts of speech as Joseph Potvin has done in the introductory Oughtomation paper.
sentence: The measured type of box == standard
JSON:

```json
{"box": [{"type": "standard"}]}
```
This means that the structured natural language sentences authored by the user can be used to derive JSON for testing in the following way:
Field within structured natural language sentence | Role in JSON data structure |
---|---|
subjectNoun | object |
objectNounOrVerb | field |
objectDescriptor | string |
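The mapping in the table above can be sketched as a small derivation function. The parsed-sentence objects here are hand-built stand-ins for authored rules; only the field names (`subjectNoun`, `objectNounOrVerb`, `objectDescriptor`) come from the table.

```javascript
// Sketch: derive a testing JSON structure from parsed structured natural
// language sentences, per the mapping table above:
//   subjectNoun -> object, objectNounOrVerb -> field,
//   objectDescriptor -> string value.
const sentences = [
  { subjectNoun: "box", objectNounOrVerb: "capacity", objectDescriptor: "0.2" },
  { subjectNoun: "box", objectNounOrVerb: "type", objectDescriptor: "standard" },
];

function toTestJson(parsed) {
  const out = {};
  for (const s of parsed) {
    // Group fields under their subject noun, one single-key object per field.
    if (!out[s.subjectNoun]) out[s.subjectNoun] = [];
    out[s.subjectNoun].push({ [s.objectNounOrVerb]: s.objectDescriptor });
  }
  return out;
}

console.log(JSON.stringify(toTestJson(sentences)));
// {"box":[{"capacity":"0.2"},{"type":"standard"}]}
```

Leaving `objectDescriptor` empty for every sentence would yield exactly the blank template discussed below, so one function could serve both the pre-filled and fill-in-yourself cases.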
I would like some feedback on this. Are my assumptions correct? Let's open this for discussion.
The user would be able to accomplish the desired testing if presented with the following in a text editor, filling in the fields to create a specific scenario:
```json
{
  "box": [
    {"capacity": ""},
    {"type": ""},
    {"value": ""}
  ]
}
```
If this thinking is correct, rule testing can easily be undertaken by a user using an interface that derives a JSON data structure from the rule authoring work already done.
If there is an accurate way to generate a valid JSON structure, then there is no reason a form could not also be used to accomplish testing. This may be beneficial for users with little technical knowledge. There may also be advantages in application performance: it is probably less computationally demanding to fill out a small form than to display and edit a potentially large is.xalgo message.
From the above JSON it is a small step to arrive at an HTML form. The only missing piece of data is a human-readable label for each field. This could be as simple as using the field name as the label. However, as with the UI used for authoring rules, this can be confusing. A possible solution is to use the natural language sentence restructured as a question. In the English-language example this is easily accomplished.
sentence: The measured type of box == standard
label formulated as question: The measured type of box is?
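A rough, English-only sketch of this reformulation is below. The comparator list and the trailing-"is" cleanup are heuristics I am assuming for illustration; they are not part of the Oughtomation sentence schema.

```javascript
// Heuristic sketch: turn a structured natural language sentence into a
// question label by keeping everything before the comparator, dropping
// any trailing "is", and appending "is?".
function toQuestionLabel(sentence) {
  const head = sentence.split(/\s*(==|>=|<=|>|<)\s*/)[0];
  return head.replace(/\s+is$/, "") + " is?";
}

console.log(toQuestionLabel("The measured type of box == standard"));
// "The measured type of box is?"
console.log(toQuestionLabel("The used capacity of the box is >= 0.5"));
// "The used capacity of the box is?"
```

Even in English this string manipulation is fragile, which reinforces the point below about investigating other languages before committing to this approach.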
I'm not sure if this is feasible in other languages, but is worth investigating alongside the ongoing inquiry into the universality of the structured natural language sentence.
Every valid test should correspond to a column in the truth table. In theory, testing could be accomplished by running columns. However, allowing only this will make it difficult to spot unaccounted-for scenarios. For this reason, it is important to have a method of authoring original is.xalgo messages.
That said, there should probably be a way to select a column and use it to populate an is.xalgo test message. Similarly, it would be useful to highlight on the table editor which column corresponds to the test is.xalgo message.
If asked to implement testing today, I would be inclined to include elements from the last three UI options. Being able to test using a form allows for users to quickly ensure they have authored a rule as intended. However, I also think there is value in being able to see the JSON structure of an is.xalgo message, even if that is not what is being tested. For this reason, I would likely have two tabs allowing the is.xalgo message to be edited using both methods. Finally, I would incorporate the functionality described in column select.