Generate differs between the 10 and 100 runs

  • New "new" format
  • Init must check if both files are the same; if so, use the new format; if not, use the "new new" format (see the sketch after this list).
  • Validation needs pairs of files (Generated/Reference); assume short is always present, long is optional
  • "update" can read the Validation directly
  • If the generated file doesn't exist in the long run, ask again
  • In "init", update the reference (this should be done inside Generator)\

0.20 Documentation

  • Rundir
  • Reduction
  • Small-Fail/Large-Fail
  • What the absolute and relative differences are (see the sketch below)
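
For the documentation, a sketch of one common definition of these two terms (assuming the reference value is the denominator of the relative difference; the real Verify semantics may differ):

// Absolute difference: how far the result is from the reference.
fn absolute_difference(result: f64, reference: f64) -> f64 {
    (result - reference).abs()
}

// Relative difference: the absolute difference scaled by the reference.
fn relative_difference(result: f64, reference: f64) -> f64 {
    (result - reference).abs() / reference.abs()
}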

Not reducing

  • ===THIS MAY BE UNNECESSARY WITH THE NEW CONFIG FORMAT===
  • caseSetupDict.static, maybe?
  • When there is no reduction, there is no need to build the short/long run things
    • The old format supports that; we just need init to detect when there are no reductions and generate the old format.
    • There is also a suggestion to change the run-dir when there is no reduction, which I'm not sure is possible due to the ordering of "Expander"/"Reducer"...
    • Also, the meta information would need to hold both run-dirs (as one option).
      • This would help the other point of keeping the run-dir static for init, update and run.
      • Expanding all run-dirs at the start would save a few runs too.
  • In the pipeline, the sequence is Expander, Reducer, Loader.
    • The Expander doesn't know if the case can be reduced or not, and can't make a judgement if we need 1 or 2 run-dirs.
    • Maybe Reducer could have a "reducer probability" kind of thing that Expander could call to find out the number of run-dirs.
      • Function returns: caseSetupDicts that can be reduced but are not continuations, caseSetupDicts that are continuations, and caseSetupDicts that can't be reduced (see the sketch after this list). I'd like to make it an iterator, but I guess returning a Vec is good enough (especially since we need to sort the files in the file system to find out the continuations).
      • Problem: Variations need to be at least partially expanded, 'cause they add/replace caseSetupDicts
        • So Reducer needs to know about variations and the merge space from the Expander, and Expander needs to know about reducibility from the Reducer.
        • (All this to know how many run-dirs we need...)
        • Default doesn't have a long run (all caseSetupDicts are static); variations have short and long runs (at least one caseSetupDict is reducible).
        • Rename the run-dir after expansion, if found anything that can be reduced?
        • Expand the short and long run-dirs always, and then just ignore/delete the long one if the case isn't reducible?
          • We need to remove it if it doesn't have a long validation either.
    • Should we follow the same idea for controlDicts?
  • Need to document this properly #0.20 Documentation
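
A rough sketch of the Reducer probe mentioned above (all names hypothetical): it returns the three groups of caseSetupDicts so the Expander can decide how many run-dirs to create.

use std::path::PathBuf;

struct ReductionReport {
    reducible: Vec<PathBuf>,     // can be reduced, not continuations
    continuations: Vec<PathBuf>, // part of a continuation chain
    fixed: Vec<PathBuf>,         // can't be reduced (static)
}

impl ReductionReport {
    // Everything static: old format, single run-dir. Anything reducible
    // (or a continuation): short + long run-dirs.
    fn run_dirs_needed(&self) -> usize {
        if self.reducible.is_empty() && self.continuations.is_empty() {
            1
        } else {
            2
        }
    }
}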

system/validationDict

Idea for a Foam-formatted file for Verify

// Execution is run in sequence: If the first run ("quick") fails, then the
// second run ("short") is run, and if that fails, the third is run ("long"),
// and so on. On the other hand, the execution of the runs stops on the first
// full success.
//
// This example is a bit of a stretch -- why would we want to run the same case
// 4 different times? -- but it shows that, if the need to run some other
// number of runs appears in the future, we can support it. It also shows
// that one can have mixed reduced and non-reduced runs (e.g., there could be
// just one run without reduction, or just one reduced run, and so on.)
// We could also add support in Verify to only execute runs with a specific
// name, allowing to run all examples in their "infinite" (non-reduced) mode
// (as long as they all name their runs with no steps "infinite", that is).
//
// Names are free form, and we could use them to define the run-dir, e.g.,
// the "short" validation would run in ".run-short", "long" in ".run-long" and
// so on.

runs 
(
    // This is a quick test: The example is reduced to just 2 timesteps, and
    //  we check the resulting file against its MD5 hash.
	quick {
		steps 2;
		// Any other key name that is not reserved ("steps", "continuations")
		// represents a file to be compared.
		checks
		(
			{
				result "postProcessing/2/qux";
				md5 "123123";
			}
		)
	}
	
	// In case the MD5 fails, the case is run again, but this time reduced to
	// 10 timesteps. This run also includes continuations, by changing the 
	// listed files to make one follow the timesteps of the previous. The run 
	// will fail if relative or absolute differences are above 0.
	short {
		// ********** Maybe future improvements **********
		execute previousFails;  // previousFails, previousPass, always
		workspace reuse;        // restart (default), reuse
		changes {
			"foamFile" {
				"system/controlDict/timestep" 0.2;
			}
		}
		// ***********************************************
		steps 10;
		// This makes the continuations explicit, by specifying files that
		// form a single run.
		continuations (
			"system/caseSetupDict.initial"
			"system/caseSetupDict.continuation1"
			"system/caseSetupDict.continuation2"
		);
		checks
		(
			{
				result "postProcessing/10/foo";
				reference "verification/10/foo";
				variables "phi" "Tmean";
				// No tolerances mean "it will fail if the values are 
				// not the same as they are in the reference".
			}
			{
				// A different file, with different variables.
				result "postProcessing/10/bar";
				reference "verification/10/bar";
				variables "Tmin" "Tmax";
			}
		)
	}
	
	// If in 10 timesteps the values do not match the reference file, a
	// 100-timestep run is done. There is no continuation, and for this run
	// to fail, both the absolute and relative differences must be above the
	// designated thresholds.
	long {
		steps 100;
		// This file only appears in the 100-step run, so we can't have a
		// global list of files not attached to steps (although it is the
		// same "foo" from the 10-step run).
		checks
		(
			{
				result "postProcessing/100/foo";
				reference "verification/100/foo";
				variables "phi" "Tmean";
			}
			{
				result "postProcessing/100/bar";
				reference "verification/100/bar";
				variables "Tmin" "Tmax";
			}
		)
	}
	
	// If the 100-timestep run fails, then we run the example a 4th time,
	// this time without any reductions (the "steps" property is not set in
	// this run). There are no continuations either.
	"run till completion" {
		checks
		(
			{
				result "postProcessing/20000/baz";
				reference "verification/20000/baz";
				variables "integral";
				absolute 20;
				relative 20;
				// No operator means OR, so the example will fail if the
				// absolute difference is above 20 OR the relative
				// difference is above 20.
			}
		)
	}
);

// This is used for filtering.
tags "GIB" "AES" "compressible";

// Only present if the example can't be run on some platform.
unstable true;
operatingSystem windows; // valid values: "windows", "linux", "all"
reason "Allrun uses Python, and Python isn't usually available on Windows";

Pest:

foam = { SOI ~ entry ~ EOI }

// general stuff
quote = _{ "\"" }
semicolon = _{ ";" }
braces_open = _{ "{" }
braces_close = _{ "}" }
parentheses_open = _{ "(" }
parentheses_close = _{ ")" }
specials = { semicolon | braces_open | braces_close | parentheses_open | parentheses_close | WHITESPACE }
WHITESPACE = _{ " " | "\t" | NEWLINE }
COMMENT = _{ "//" ~ (!NEWLINE ~ ANY)* }

// somewhat complex structures
single_word = @{ (!specials ~ ANY)+ }
quoted_word = @{ quote ~ (!quote ~ ANY)* ~ quote }
keyword = { quoted_word | single_word }
value = { keyword }

// main foam stuff
entry = { (dictionary | list | attribution)* } 

attribution = { keyword ~ value+ ~ semicolon }
dictionary = { keyword ~ braces_open ~ entry ~ braces_close }
list = { keyword ~ parentheses_open ~ value+ ~ parentheses_close ~ semicolon }
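
Assuming the grammar above is saved as foam.pest, hooking it up is the usual pest_derive boilerplate (this part is not in the notes, just the standard API):

use pest::Parser;
use pest_derive::Parser;

#[derive(Parser)]
#[grammar = "foam.pest"] // the Rule enum is generated from the grammar file
struct FoamParser;

fn parse(input: &str) {
    match FoamParser::parse(Rule::foam, input) {
        Ok(pairs) => println!("{pairs:#?}"),
        Err(e) => eprintln!("parse error: {e}"),
    }
}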

Parsed:

- foam
  - entry
    - list
      - keyword > single_word: "variables"
      - value > keyword > quoted_word: "\"phi\""
      - value > keyword > quoted_word: "\"meanT\""
    - list
      - keyword > single_word: "runs"
      - dictionary
        - keyword > single_word: "quick"
        - entry
          - attribution
            - keyword > single_word: "steps"
            - value > keyword > single_word: "2"
          - attribution
            - keyword > single_word: "generatedFile"
            - value > keyword > quoted_word: "\"postProcessing/2/blah\""
          - dictionary
            - keyword > single_word: "failIf"
            - entry > attribution
              - keyword > single_word: "md5Differs"
              - value > keyword > quoted_word: "\"123123\""
      - dictionary
        - keyword > single_word: "short"
        - entry
          - attribution
            - keyword > single_word: "steps"
            - value > keyword > single_word: "10"
          - attribution
            - keyword > single_word: "generatedFile"
            - value > keyword > quoted_word: "\"postProcessing/10/blah\""
          - attribution
            - keyword > single_word: "referenceFile"
            - value > keyword > quoted_word: "\"verification/10/blah\""
          - list
            - keyword > single_word: "continuations"
            - value > keyword > quoted_word: "\"system/caseSetupDict.initial\""
            - value > keyword > quoted_word: "\"system/caseSetupDict.continuation1\""
            - value > keyword > quoted_word: "\"system/caseSetupDict.continuation2\""
      - dictionary
        - keyword > single_word: "long"
        - entry
          - attribution
            - keyword > single_word: "steps"
            - value > keyword > single_word: "100"
          - attribution
            - keyword > single_word: "generatedFile"
            - value > keyword > quoted_word: "\"postProcessing/100/blah\""
          - attribution
            - keyword > single_word: "referenceFile"
            - value > keyword > quoted_word: "\"verification/100/blah\""
          - dictionary
            - keyword > single_word: "failIf"
            - entry
              - attribution
                - keyword > single_word: "absolute"
                - value > keyword > single_word: "10"
              - attribution
                - keyword > single_word: "relative"
                - value > keyword > single_word: "10"
              - attribution
                - keyword > single_word: "operator"
                - value > keyword > single_word: "and"
      - dictionary
        - keyword > single_word: "infinite"
        - entry
          - attribution
            - keyword > single_word: "generatedFile"
            - value > keyword > quoted_word: "\"postProcessing/20000/blah\""
          - attribution
            - keyword > single_word: "referenceFile"
            - value > keyword > quoted_word: "\"verification/20000/blah\""
          - dictionary
            - keyword > single_word: "failIf"
            - entry
              - attribution
                - keyword > single_word: "absolute"
                - value > keyword > single_word: "20"
              - attribution
                - keyword > single_word: "relative"
                - value > keyword > single_word: "20"
    - list
      - keyword > single_word: "tags"
      - value > keyword > quoted_word: "\"GIB\""
      - value > keyword > quoted_word: "\"AES\""
      - value > keyword > quoted_word: "\"compressible\""
    - dictionary
      - keyword > single_word: "unstable"
      - entry
        - attribution
          - keyword > single_word: "operatingSystem"
          - value > keyword > single_word: "windows"
        - attribution
          - keyword > single_word: "reason"
          - value > keyword > quoted_word: "\"Allrun uses Python, and Python isn't usually available on Windows\""
  - EOI: ""

Migration plans?

  • Side pipeline?
    • The current pipeline still exists; a new pipeline is built next to it with the new actors and the new structures, and the "finder" sends requests depending on found files.
    • What about reporter?
  • No actors?
    • The new design is in its early stages and I'm not sure how long it will take until both are feature-compatible.
    • Also, what about the current tests?

Intermediate format?

  • We have nothing
  • We find a dictionary
    • That's a run, so we need to keep this in a list (one possible node shape is sketched after this list)
      • In the run, we need to capture each content
        • It is "steps"? Int
        • It is 'failIf'? process dic
      • Now how it is "run"?
      • How do we fill the blanks while loading?
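
One possible shape for that intermediate format, matching the "Named dicts" structure at the end of these notes: every keyword maps to a list of nodes, so repeated keys, lists and nested dictionaries all fit.

use std::collections::HashMap;

#[derive(Debug)]
enum Node {
    Value(String),                          // "2", "postProcessing/2/qux"...
    List(Vec<Node>),                        // ( ... )
    Dictionary(HashMap<String, Vec<Node>>), // { ... }
}

Filling the blanks while loading then becomes a second pass: walk the tree, and where a key like "steps" is expected, convert the Value to an int (or fall back to a default when the key is absent).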

Logos

use logos::Logos;

#[allow(dead_code)]
#[derive(Logos, Debug)]
#[logos(skip r"[ \t\n\r]")]
pub(crate) enum Token<'a> {
    // Block comments; the simple character class stops at any inner '/' or '*'.
    #[regex(r#"\/\*[^\/\*]*\*\/"#, |lex| lex.slice())]
    MultilineComment(&'a str),

    // A keyword is either a quoted word (quotes stripped by the callback)
    // or a bare alphanumeric word.
    #[regex(r#""[^"]+""#, |lex| lex.slice().trim_start_matches('"').trim_end_matches('"'))]
    #[regex("[a-zA-Z0-9]+", |lex| lex.slice())]
    Keyword(&'a str),

    #[regex(r#"//[^\n]*"#, |lex| lex.slice())]
    Comment(&'a str),

    #[token(";")]
    End,

    #[token("{")]
    DictStart,

    #[token("}")]
    DictEnd,

    #[token("(")]
    ListStart,

    #[token(")")]
    ListEnd,
}
variable "value with quotes";

Keyword("variable") => Keyword("value with quotes) => End

dict { dict { var value; }}

Keyword("dict") => DictStart => Keyword("dict) => DictStart => Keyword("var") => Keyword("value) => End => DictEnd => DictEnd

So:

  • Keyword (always)
  • if next is:
    • DictStart: Start dict
    • ListStart: Start List
    • Keyword: Grab keyword, make valuelist

loop {

  • Grab keyword (key)
  • Grab next element:
    • Comment? None
    • MultilineComment: None
    • ListStart: process_list (consume token, anyway)
    • ListEnd: ERROR
    • DictStart: process_dictionary (consume token, anyway)
    • DictEnd: Empty dictionary, ERROR
    • Keyword: assume values, start Vec, add keyword, consume till "End"
    • End: ERROR
  • Grab next element:
    • None: Complete
    • DictEnd: This dictionary is complete, return it
    • Anything else: loop again

}
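
A rough sketch of that loop as a recursive descent over the tokens, building the Node enum from the intermediate-format section (comments are assumed to be filtered out of the iterator beforehand; errors are collapsed into Option to keep the sketch short):

use std::collections::HashMap;

fn process_dictionary<'a, I>(tokens: &mut I, nested: bool) -> Option<HashMap<String, Vec<Node>>>
where
    I: Iterator<Item = Token<'a>>,
{
    let mut dict = HashMap::new();
    loop {
        // Grab keyword (key), or detect that this dictionary is complete.
        let key = match tokens.next() {
            None if !nested => return Some(dict),             // top level: complete
            Some(Token::DictEnd) if nested => return Some(dict),
            Some(Token::Keyword(k)) => k.to_string(),
            _ => return None,                                 // ERROR
        };
        let entry = dict.entry(key).or_insert_with(Vec::new);
        // Grab the next element and dispatch on it.
        match tokens.next()? {
            Token::ListStart => entry.push(Node::List(process_list(tokens)?)),
            Token::DictStart => entry.push(Node::Dictionary(process_dictionary(tokens, true)?)),
            Token::Keyword(v) => {
                // Assume values: collect keywords until End.
                entry.push(Node::Value(v.to_string()));
                loop {
                    match tokens.next()? {
                        Token::End => break,
                        Token::Keyword(k) => entry.push(Node::Value(k.to_string())),
                        _ => return None,                     // ERROR
                    }
                }
            }
            _ => return None,                                 // ERROR (End, ListEnd, empty dict)
        }
    }
}

fn process_list<'a, I>(tokens: &mut I) -> Option<Vec<Node>>
where
    I: Iterator<Item = Token<'a>>,
{
    let mut items = Vec::new();
    loop {
        match tokens.next()? {
            // The grammar closes a list with ");".
            Token::ListEnd => return matches!(tokens.next()?, Token::End).then_some(items),
            Token::Keyword(k) => items.push(Node::Value(k.to_string())),
            Token::DictStart => items.push(Node::Dictionary(process_dictionary(tokens, true)?)),
            _ => return None,
        }
    }
}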

Post lexer work

  • grep filter on expansion
  • Runner with comparer
    • Spawn tasks? How to join?
    • Also, since the executor follows one run, and the comparer follows that run, does using tasks even matter?
  • Build "rundir" when loading Verification
  • Variations?!?!
    • Expand the whole case at the start.
    • Load verification from expanded dirs.
      • This is what the "Loader" was doing before, which I inadvertently merged with the verification loader.
    • Order is mixed: The whole example can be expanded into their runs, but the execution is per non-expanded content (default, variations)
    • Need to send, into the pipeline, the default, variations and the replicated Verification for each.
      • Problem: The variation may contain different steps, anyway. So need to "expand" without having the verification info beforehand.
    • What if:
      • Expand the default run into .run-default
      • Expand variations into .run-[variationname] (see the sketch after this list)
      • Inside each of those, expand the case (default won't expand, but still....)
      • We could say it is not possible to add a new verificationDict into variations, but I do like the idea of having a different content for it.
        • Ok, ignoring this for now 'cause it fucks up my thinking.
        • Expand by run on default, send variations down the pipeline
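
The naming scheme from the "what if" above, as a trivial sketch (purely illustrative; combining it with the per-run ".run-short"/".run-long" names from the validationDict idea is still open):

fn expansion_dir(variation: Option<&str>) -> String {
    match variation {
        None => ".run-default".to_string(),
        Some(name) => format!(".run-{name}"),
    }
}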

Named dicts

list ( name1 { inner 1; } name2 { inner 2; } )
Dictionary({
	"list": [
		List(
			[Value("name1"), Dictionary({"inner": [Value("1")]}),
			 Value("name2"), Dictionary({"inner": [Value("2")]})
			]
		)
	]
})
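
A possible helper for consuming that alternating Value/Dictionary pattern, pairing each name with the dictionary that follows it (hypothetical, building on the Node enum sketched earlier):

fn named_dicts(items: &[Node]) -> Option<Vec<(&str, &HashMap<String, Vec<Node>>)>> {
    let mut out = Vec::new();
    let mut iter = items.iter();
    while let Some(node) = iter.next() {
        match (node, iter.next()?) {
            (Node::Value(name), Node::Dictionary(dict)) => out.push((name.as_str(), dict)),
            _ => return None, // not a name/dict pair
        }
    }
    Some(out)
}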