
Move to baangt-builds

bernhardbuhl 4 years ago
parent
commit
c2c5913419
43 changed files with 1 addition and 1498 deletions
  1. 0 9
      MakePackage.sh
  2. 0 38
      TechSpecs/01 in creation/Cleanup.md
  3. 0 35
      TechSpecs/01 in creation/ProxyUI.md
  4. 0 0
      TechSpecs/01 in creation/__ignore.txt
  5. 0 62
      TechSpecs/01 in creation/openFiles.md
  6. 0 56
      TechSpecs/30 Ready/randomValues.md
  7. 0 79
      TechSpecs/60 InProgress/NestedLoopsOfData.md
  8. 0 78
      TechSpecs/60 InProgress/SaveResults2Database.md
  9. 0 44
      TechSpecs/60 InProgress/UI_ShowStatusOfTestrun.md
  10. 0 0
      TechSpecs/60 InProgress/__ignore.txt
  11. 0 8
      TechSpecs/70 Implementation done/CREATE_EXECUTABLE.md
  12. 0 64
      TechSpecs/70 Implementation done/PDFComparisonMicroService.md
  13. 0 149
      TechSpecs/70 Implementation done/PYSIMPLEGUI_UI_ADDITONS_1.md
  14. 0 0
      TechSpecs/70 Implementation done/__ignore.txt
  15. 0 126
      TechSpecs/99 Done/CONCEPT_DB_UI.md
  16. 0 41
      TechSpecs/99 Done/DATABASE_AND_UI.md
  17. 0 6
      TechSpecs/99 Done/DOWNLOAD_BROWSER_DRIVERS.md
  18. 0 64
      TechSpecs/99 Done/KatalonRecorderImporter.md
  19. 0 16
      TechSpecs/99 Done/LogNetworkTraffic.md
  20. 0 32
      TechSpecs/99 Done/MULTIPROCESSING_REFACTOR.md
  21. 0 15
      TechSpecs/99 Done/Plugin-Readyness.md
  22. 0 13
      TechSpecs/99 Done/READ_THE_DOCS.md
  23. 0 54
      TechSpecs/99 Done/ResultDirectoriesRelocate.md
  24. 0 18
      TechSpecs/99 Done/TECHSPEC_RUNLOG.md
  25. 0 10
      TechSpecs/99 Done/UnitTests_Part1.md
  26. 0 39
      TechSpecs/99 Done/Versioning.md
  27. 0 62
      TechSpecs/99 Done/XLSX2BaangtDBAndViceVersa.md
  28. 0 62
      TechSpecs/99 Done/addressCreation.md
  29. 0 26
      TechSpecs/99 Done/assertions.md
  30. BIN
      TechSpecs/ScreenshotWindowsAfterLatestCommit.png
  31. 0 9
      TechSpecs/_ProcessWithContributers.md
  32. 0 85
      analyzer_example.py
  33. 1 1
      baangt/base/PathManagement.py
  34. 0 0
      examples/example_googleImages.json
  35. 0 29
      execMac.sh
  36. 0 23
      execUbuntu.sh
  37. 0 17
      execWindows.bat
  38. 0 0
      old_Paths.json
  39. 0 40
      requirements_dev.txt
  40. 0 5
      run_test.sh
  41. 0 1
      testrun.json
  42. 0 42
      windows/baangtSetupWindows.iss
  43. 0 40
      windows/baangtWindows.spec

+ 0 - 9
MakePackage.sh

@@ -1,9 +0,0 @@
-# Remove previous build artifacts
-rm -R dist/baangt-*
-rm -R dist/baangt
-rm -R dist/baangtIA
-rm -R dist/baangt.app
-rm -R dist/baangtIA.app
-# Build source and wheel distributions, then upload to PyPI via twine
-python3 setup.py sdist bdist_wheel
-python3 -m pip install --upgrade twine
-python3 -m twine upload dist/*
-rm -R build/

+ 0 - 38
TechSpecs/01 in creation/Cleanup.md

@@ -1,38 +0,0 @@
-# Aim
-
-After prolonged or intensive use of baangt, many files accumulate that the user may no longer need.
-
-This functionality shall provide a convenient way to get rid of those files.
-
-- Logs
-- Screenshots
-- Temp Downloads
-
-Users shall have an option to specify the age (in days) of the files to be deleted. 31 is the default; 0 will delete all files.
-
-# Functional specification
-
-We need a new class plus integration into the baangt CLI and the baangt UI.
-
-## CLI
-* New parameter ```--cleanup <days>```.
-* Then call class Cleanup
-
-## UI
-* New button ```cleanup```. Popup to ask for how many days. Default 31.
-* Then call class Cleanup
-
-## Class
-* Have a method for each type of file:
-    * Logs
-    * Screenshots
-    * Temp Downloads
-* Provide method ``clean_all``, which calls all the other methods.
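-
-A minimal sketch of such a class (folder names and the per-type method names are assumptions; only ``clean_all`` is specified above):
-
-```python
-import time
-from pathlib import Path
-
-
-class Cleanup:
-    """Deletes files older than `days` days from baangt's working folders."""
-
-    def __init__(self, days: int = 31):
-        self.days = days
-
-    def _remove_old_files(self, folder: Path):
-        cutoff = time.time() - self.days * 86400
-        for file in folder.glob("*"):
-            if file.is_file() and file.stat().st_mtime < cutoff:
-                file.unlink()
-
-    def clean_logs(self):
-        self._remove_old_files(Path("logs"))
-
-    def clean_screenshots(self):
-        self._remove_old_files(Path("screenshots"))
-
-    def clean_downloads(self):
-        self._remove_old_files(Path("downloads"))
-
-    def clean_all(self):
-        self.clean_logs()
-        self.clean_screenshots()
-        self.clean_downloads()
-```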
-
-# DoD
-* Class implemented
-* CLI method implemented and tested
-* UI button implemented and tested
-* Unit-Tests for class created
-* Documentation updated
-

+ 0 - 35
TechSpecs/01 in creation/ProxyUI.md

@@ -1,35 +0,0 @@
-# Aim
-
-Have a UI for Proxy server maintenance. Users should not have to work inside JSON-files.
-
-# Prerequisites
-
-Proxy servers are currently stored in the JSON file proxies.json in /baangt.
-
-# Implementation
-
-In the baangt UI (PyQt5) create a new action button "Proxies".
-
-* On click of this button, open a new window with a table display, header for the columns of the table and buttons:
-    * Button "OK" - save result to JSON and close the screen. Return back to main screen
-    * Button "Exit" - don't save.
-* Load contents of proxies.json.
-    * The columns should arrange themselves dynamically according to the attributes in the JSON-Entries. 
-      Currently those attributes (all string unless otherwise mentioned) are:
-         * IP
-         * Port
-         * Type (either SOCKS or HTTPS - but UI shall not check validity)
-         * User
-         * Password
-         * UsedCount (int)
-         * ErrorCount (int)
-    * Next to each line, there shall be a button "Test". Alternatively a marked line from the grid shall be used and
-      Test-Button shall be on the bottom of the window.
-         * If the button is clicked, call method "testProxy" of class "ProxyRotate". If result is ```true``` display
-           "Proxy test successful". If result is not boolean display f"Proxy not reached: {result_from_call}"          
-             
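-A hedged sketch of the Test-button handler (the import path and ``testProxy``'s exact signature are assumptions):
-
-```python
-from PyQt5.QtWidgets import QMessageBox
-
-from baangt.base.ProxyRotate import ProxyRotate  # import path is an assumption
-
-
-def on_test_clicked(proxy_entry: dict):
-    # proxy_entry is one row from proxies.json (IP, Port, Type, User, ...)
-    result = ProxyRotate().testProxy(proxy_entry)  # exact signature assumed
-    if result is True:
-        QMessageBox.information(None, "Proxy test", "Proxy test successful")
-    else:
-        QMessageBox.warning(None, "Proxy test", f"Proxy not reached: {result}")
-```
-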
-# DoD
- 
-* Part of the UI delivered in a separate branch to the git repository on the Gogs server
-* Unit-Test coverage (in folder /tests) of 80% (for all committed/changed methods) 
-* no critical linter errors or warnings (PEP-8 conformity of the code)

+ 0 - 0
TechSpecs/01 in creation/__ignore.txt


+ 0 - 62
TechSpecs/01 in creation/openFiles.md

@@ -1,62 +0,0 @@
-# Current situation
-
-The output files and input files used in baangt have to be opened by explorer/finder. They are not available directly
-from within the app. 
-
-We have 4 types of files:
-* Simple Format XLSX: Testrun-definition and data in one file.
-* Full Format 
-    * XLSX: Testrun-definition with multiple tabs. Data-sheet is a separate XLSX, that can be 
-      found via sheet "TestCaseSequence" in column "TestDataFileName". There can be multiple rows.
-    * JSON: Testrun-definition in JSON-Format.
-* Result-File of a test run
-* Log-File of a test run
-
-# Aim
-
-Enable the user to open those files directly, without needing Explorer/Finder or a Python IDE.
-
-## TestRun Definition and Data files:
-
-* After the TestRun was selected, provide a button with icon https://material.io/resources/icons/?icon=open_in_new&style=baseline
-  (or similar) in the line of TestRun-Dropdown.
-* When the user clicks on it, open the file in the default application for the file type.
-* If the file has a worksheet "TestCaseSequence", read through all the lines in column "TestDataFileName" and open those
-  too. If one/more of the file(s) can't be found, log the problem and continue.
-  
-## Result file:
-
-* After the testrun was finished, provide a button with icon https://material.io/resources/icons/?icon=open_in_new&style=baseline
-  (or similar) in the statistics area of the UI main window next to the output filename.
-* When the user clicks on it, open the file in the default application for the file type.
-
-## Log file:
-
-Same as result file (show Logfile-Name, show Icon, enable click).
-
-# Implementation 
-## UI:
-* Implement as described above
-
-## baangt.base:
-* Have a new class "FilesOpen" with methods:
-    * openTestRunDefinition(filenameAndPath: str)
-    * openResultFile(filenameAndPath: str)
-    * openLogFile(filenameAndPath: str)
-
-Inside this class handle finding the files and OS-specific opening of the files.
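-
-A minimal sketch of the class (OS detection via ``platform`` and the private helper are assumptions):
-
-```python
-import os
-import platform
-import subprocess
-
-
-class FilesOpen:
-
-    @staticmethod
-    def _openFile(filenameAndPath: str):
-        # Delegate to the platform's default application for the file type
-        if platform.system() == "Windows":
-            os.startfile(filenameAndPath)
-        elif platform.system() == "Darwin":
-            subprocess.run(["open", filenameAndPath])
-        else:
-            subprocess.run(["xdg-open", filenameAndPath])
-
-    @staticmethod
-    def openTestRunDefinition(filenameAndPath: str):
-        FilesOpen._openFile(filenameAndPath)
-
-    @staticmethod
-    def openResultFile(filenameAndPath: str):
-        FilesOpen._openFile(filenameAndPath)
-
-    @staticmethod
-    def openLogFile(filenameAndPath: str):
-        FilesOpen._openFile(filenameAndPath)
-```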
-
-# DoD
-
-## UI:
-* Button appears/is active when TestRun is selected, can be clicked and calls the method "openTestRunDefinition" 
-  of the class FilesOpen with str(filenameAndPath)
-* Buttons appear/are active when TestRun was finished, can be clicked and call the method "openResultFile" or 
-  "openLogFile" of the class FilesOpen with str(filenameAndPath)
-* Button for TestRun-File display disappears/is inactive when no TestRun was selected
-* Button for Result and Log-File disappears/is inactive when no TestRun was executed yet or when a TestRun is currently 
-  active
-  
-## baangt.base:
-* Implemented and unit-tested on at least 2 OS (MacOS/Linux or Windows/Linux)
-* Unit-Tests written. Coverage at least 80%. Example for test data in /tests/*-Folder

+ 0 - 56
TechSpecs/30 Ready/randomValues.md

@@ -1,56 +0,0 @@
-# Provision of intelligent random values
-
-## Aim
-
-In test data creation we often have very specific requirements to facilitate reproducible outcomes, both in
-exploratory and in regression testing. Sometimes, however, we want random values.
-
-Example:
-* Names of business partners
-* Random text
-
-We want to provide an easily accessible feature for users of ```baangt``` to deal with this requirement without needing
-to develop code.
-
-## Implementation
-
-Implement a class (RandomValues), activity (Random) in API and SimpleFormat as well as variable replacement (``$(RANDOM)``). 
-The implementation of the activity and variable replacement needs to be done in ``TestStepMaster``.
-
-### Class RandomValues
-
-The class takes no parameters in init; the method ``retrieveRandomValue`` takes the following:
-* RandomizationType (default: "String". Other values: Int)
-* Min (default: 3)
-* Max (default: 10)
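-
-A hedged sketch of the class (defaults follow the list above; parameter spelling is an assumption, and Min/Max govern the length of the result per the "Randomization" section):
-
-```python
-import random
-import string
-
-
-class RandomValues:
-
-    def retrieveRandomValue(self, randomizationType: str = "String",
-                            minimum: int = 3, maximum: int = 10):
-        # Min/Max define the length of the generated value
-        length = random.randint(minimum, maximum)
-        if randomizationType == "Int":
-            return random.randint(10 ** (length - 1), 10 ** length - 1)
-        return "".join(random.choices(string.ascii_letters, k=length))
-```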
-
-### Activity "Random"
-
-Parameters of the activity ``Random`` are defined in field ``value`` with JSON-Format (e.g. ``{Type:String,Min:5,Max:40}``) 
-and must be mapped to the method parameters (Type->RandomizationType, Min->Min, Max->Max).
-
-In this case the return value must be stored to testDataDict into the variable defined in column ``value2``
-
-### Variable replacement
-
-When ``$(RANDOM)`` is used, e.g. in activity ``SETTEXT`` in the column "Value", then we should execute the method
-``retrieveRandomValue``. The logic must also be implemented in ``TestStepMaster`` in method ``replaceVariables``.
-
-Additional parameters can also be given in variable replacement, e.g. ``$(RANDOM{Min:10,Max:60})``
-
-### Randomization
-Implement random value generation for one String and one Integer type; the length of the result is governed by the Min/Max definition.
-
-### Storing 
-In Activity "Random" we need to store the resulting value in the field given in ```value2``` from the TestStep.
-
-## Example File
-An example File with explanations can be found in folder ``Examples``, filename ``Random.xlsx``
-
-## Scope / DOD (including effort estimation)
-* Implementation provided (in line with coding standards and PEP-8 conformity) (2 hours)
-* Functional test executed and passed (theoretically if values from ``Random.xlsx`` work, this should be enough) (1 hour)
-* Enhance existing documentation in docs-Folder in RST-Format (0,5 hours)
-    * in this case in simpleExample.rst and SimpleAPI.rst
-* Unit-Tests in tests-folder providing reasonable coverage (e.g. 80%) of the provided functionality (1 hour)
-* git commit to feature branch and pull request on Gogs created (no additional effort)

+ 0 - 79
TechSpecs/60 InProgress/NestedLoopsOfData.md

@@ -1,79 +0,0 @@
-# Current situation
-
-```baangt``` provides a simple way to run test automation with as little as one Microsoft Excel File (of course with
-reduced functionality, compared to full Excel format or ``baangtDB``). This works well as long as data for one test case
-can be read from one line in the data-file or data-tab (simple format).
-
-When we have nested data, it's not easily possible to process it, for example:
-
-* Sales order -> currently possible with baangt simple format. All good
-* Line items -> currently only possible by extending the fields of sales order with e.g. ```item_01_materialnumber```, 
-  ``item_01_quantity``, ``item_02_materialnumber``, ``item_02_quantity`` and so on.
-* Schedule items (e.g. 10 pieces in June, 20 pieces in July, etc.) -> with the current data structure not possible.
-
-# Functional requirement
-
-We need to provide a simple way to deal with nested data structures like in the example above, without the user having
-to code. 
-
-# Solution approach
-
-## Provide JSON-Arrays in Excel cells.
-
-Enable ``baangt`` to read data from cells as arrays: 
-```
-[{"line-item-number": 1,
-  "material-number": "baangtSticker",
-  "quantity": 10,
-  "schedule": [{
-    "date": "2020-03-20",
-    "quantity": 5},
-    {
-    "date": "2020-05-01",
-    "quantity": 5}]
- },
- {"line-item-number": 2,
-  "material-number": "baangtCourse",
-  "quantity": 5
- }]
-```
-
-## Provide possibility to write in separate tab (and interpret Tab into JSON)
-As the above mentioned way is definitely convenient to work with in Python, but is not at all user friendly, we should
-provide a function to read structured data from another Excel-Tab.
-
-If the column name of the current Excel-Column is identical to a tab-name in the same XLSX, read the tab and move all
-data from there, that is identical with the test case number, into the cell in the test case data as JSON (see above).
-
-## New command in testStepMaster.py
-
-We'll need a new command ``repeat``. Value1 is the column/field-name in testDataDict. The implementation of this command
-must loop over the JSON array and process each test step until it reaches the (also new) command ``repeat-done``. It 
-shall automatically create a variable ``_enum`` in each loop (also nested) to track the current position.
-
-Nested ``repeat`` commands are possible, as a JSON string may include further lists of dicts. For any command executed within 
-the ``repeat``, ``Value1`` would read ``<column_name>.<dict_key>``, and for a nested entry ``<column_name>.<dict_key>.<dict_key>`` (see the sketch below).
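-
-A hedged sketch of how ``TestStepMaster`` could process the loop (function and helper names are assumptions):
-
-```python
-import json
-
-
-def run_repeat(steps_in_loop, testDataDict: dict, column: str):
-    # Execute all steps between REPEAT and REPEAT-DONE once per array entry;
-    # `_enum` tracks the current position, as described above.
-    entries = testDataDict[column]
-    if isinstance(entries, str):
-        entries = json.loads(entries)
-    for enum, entry in enumerate(entries, start=1):
-        testDataDict[f"{column}._enum"] = enum
-        for key, value in entry.items():
-            testDataDict[f"{column}.{key}"] = value
-        for step in steps_in_loop:
-            execute_step(step, testDataDict)  # placeholder for the real step execution
-```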
-
-## Export data 
-* Export format in XLSX shall contain the nested data as tabs with reference to test case line.
-* Export format in Database shall store the data in table ``<stage>_<object>`` (in above example e.g. ``test_schedule``)
-
-### Example:
-* REPEAT items
-* SETTEXT <some_xpath_to_material_field$(items._enum)> $(items.materialnumber)
-* SETTEXT <some_xpath_to_quantity_field$(items._enum)> $(items.quantity)
-* REPEAT items.schedule
-* SETTEXT <some_xpath_to_schedule_line$(items.schedule._enum)_date> $(items.schedule.date)
-* SETTEXT <some_xpath_to_schedule_line$(items.schedule._enum)_quantity> $(items.schedule.quantity)
-
-# DoD (including effort estimation (18 hours))
-
-* Implementation of JSON-Loops in TestStepMaster done and tested (2 hours)
-* Implementation of commands ```repeat``` and ``repeat-done`` incl. ``_enum`` (2 hours)
-* Implementation of Excel-Import from tabs into JSON-Field done and tested (2 hours)
-* Implementation of Excel-Export and database inserts for nested data (3 hours)
-* One working example file in /examples using 2 levels of nested data without other tabs (2 hours)
-* One working example file in /examples using 2 levels of nested data from other tabs (2 hours)
-* Updated documentation in /docs-Folder (1 hour)
-* Unit-Test coverage (in folder /tests) of 80% (for all committed/changed methods) (4 hours)
-* no critical linter errors or warnings (PEP-8 conformity of the code) (no additional effort)

+ 0 - 78
TechSpecs/60 InProgress/SaveResults2Database.md

@@ -1,78 +0,0 @@
-# Current situation
-
-Currently all test runs are already saved in a database. This happens in baangt.base.ExportResults.ExportResults.py in
-method exportToDataBase.
-
-Saving happens for:
-* Testrun name
-* Logfile name
-* Count of successful test cases
-* Count of not successful test cases
-* GlobalVars (JSON-String)
-* Name of data file
-
-This works well, but is not enough.
-
-# Goal
-We need to have more details about the test runs in the database, so that we can do more analytics, for instance:
-
-* Comparison of durations of the same test case throughout an extended period of time
-* Detect problematic services during longer running test cases
-
-The above mentioned analytics is **not** part of this TechSpec.
-
-# Implementation
-
-* Extend data persistence to save all data, that is currently exported in XLSX-Format also for database
-    (not as JSON into one field but as structured tables with columns).
-    * For that to work (and for other purposes) each object (TestRun, TestcaseSequence, TestCase) must have a unique key, that can be saved in the database in order to have key-columns for the database tables.
-        
-         Each table will have the UUID as key field.
-         
-    * Also testDataDict for each test case needs to be stored. Info: During a test run, fields can be added. So before
-        saving testDataDict to an existing table of this test data object, the structure needs to be checked and may need
-        additional fields compared to a previous execution.
-    * Export UUID of test run (in summary tab) and test cases (in tab Output, Timing and Network) in XLSX-Export
-         
-* Add capability to save test case result data in additional export format.
-    * When a column of testDataDict is in format ``$<objectname>-<fieldname>``:
-        * For XLSX-Output:
-           * Add a new tab to MS Excel and fill in the data into this tab. Tab-name is ``<stage>_<objectname>``. 
-           * Create columns in the header: ``<stage>``, ``<uuid>`` of the test case, ``[<fieldname>]``
-           * For each entry in testDataDict write ``stage``, ``uuid``,``<fieldname>`` into the appropriate column 
-             in this Excel-Tab.
-           * If there are more fields with the same ``$<objectname>``, add columns for each ``<fieldname>``.
-        * Into the database:
-            * look for a table ``<stage>_<objectname>``.
-            * If it exists, check if ``[<fieldname>]`` are found in the structure
-            * If it doesn't exist, create the table with structure ``uuid``, ``[<fieldname>]``
-            * If needed, extend structure
-            * Add also ``uuid`` of the test case.
-            * Append entries similarly to XLSX above.
-            
-_An example_: 
-
-For instance, if we have a column ``$customer-customernumber`` and the value of ``stage`` is ``Dev``, then create a 
-worksheet ``Dev_Customer``. The new tab has columns ``stage``, ``UUID``, ``customernumber``. If there's another field
-``$customer-customername``, then add one more column ``customername`` to the output tab ``Dev_Customer``.
-            
-This will lead to double saving of the same data, once in testDataDict (e.g. in XLSX in Tab Output) and once in a separate
-tab ``<stage>_<objectname>``. The reason for that is to provide a simpler way for users to extract such data from multiple
-XLSX-Files into one database, in case they don't want to or can't use the built-in database storage.
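-
-A hedged sketch of the grouping step (the function name and the exact column pattern are assumptions):
-
-```python
-import re
-
-
-def split_object_columns(testDataDict: dict, stage: str) -> dict:
-    # Group all columns in format $<objectname>-<fieldname> by their
-    # target table/tab name <stage>_<objectname>.
-    tables = {}
-    for column, value in testDataDict.items():
-        match = re.fullmatch(r"\$(\w+)-(\w+)", column)
-        if match:
-            objectname, fieldname = match.groups()
-            tables.setdefault(f"{stage}_{objectname}", {})[fieldname] = value
-    return tables
-
-
-# {"$customer-customernumber": "4711"} with stage "Dev"
-# yields {"Dev_customer": {"customernumber": "4711"}}
-```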
-
-# Examples for using the additionally stored data:
-* Creating master data records (e.g. customers) in a certain stage (e.g. Test, Pre-Quality, Final-Quality, Migration, etc.)
-  and re-using this data in other test cases (e.g. customer orders). In this case the user would first run the test cases
-  that create master data, then Cross-reference the results in their customer order data records for this stage,
-  and finally run those test cases (the cross-referencing happens in XLSX, for instance using VLOOKUP or VBA). 
-            
-# DoD (incl. rough effort estimation)
-
-* XLSX and Database are populated with the above mentioned data (new fields, new Tables/Tabs) (6-8 hours)
-* STAGE-variable existing in all TestRuns (default value = ``Test``). Hint: Can be set in HandleDatabase.py in __init__ 
-  as e.g. GC.EXECUTION_STAGE = GC.EXECUTION_STAGE_TEST (0,5 hours)
-* One working example file in /examples using multiple fields in format ``$<objectname>-<fieldname>`` including 
-  output file committed to examples-folder. (2 hours)
-* Updated documentation in /docs-Folder (1 hour)
-* Unit-Test coverage (in folder /tests) of 80% (for all committed/changed methods) (4 hours)
-* no critical linter errors or warnings (PEP-8 conformity of the code) (no additional effort)

+ 0 - 44
TechSpecs/60 InProgress/UI_ShowStatusOfTestrun.md

@@ -1,44 +0,0 @@
-# Situation (Munish)
-
-Right now, when you start the ``baangt`` interactive starter, choose a testrun and globals file and execute the testrun, there's
-a log either in the console or in your IDE (depending on where you start it), but directly in the UI you don't see
-anything except an hourglass.
-
-# Aim
-
-## Statistics
-In the right area of the UI (under the logo), we should have a section showing statistical information (number of testcases
-to execute, count of testcases executed, count of testcases successful, count of testcases paused, count of testcases failed
-and overall duration since test case start).
-
-Under this, the count of executed TestCaseSequences, TestCases, TestStepSequences and TestSteps should be displayed.
-
-## Logs
-Additionally in the lower part of the window a section should appear (and disappear when the test run is finished), that
-shows the Info/Error/Warning-Messages that are logged.
-
-# Additional information
-
-Apart from using these statistical information in the current UI, the flask-Implementation will also use the same information.
-To be efficient, we'll need to have data gathering in one separate class, that can be used by current UI as well as flask. 
-Current flask implementation is in directory flask.
-
-# Implementation
-
-The majority of the data points can be found or derived from ``baangt.base.TestRun.TestRun`` in the method ``executeDictSequenceOfClasses``
-(see documentation there). For single runs (one browser or one API session), that should work fine.
-
-Parallel runs are different/difficult (``TestCaseSequence.TestCaseSequenceParallel``), as they run in a different Python instance 
-than the UI. This means that during the runtime of the parallel sequence, a Queue must be written to by the parallel runs
-and read from within the UI (see the sketch below).
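-
-A hedged sketch of the queue handover (all names are assumptions):
-
-```python
-from multiprocessing import Queue
-
-# Parallel runs push status updates into the shared queue; the UI polls it
-# periodically without blocking.
-statusQueue = Queue()
-
-
-def report_status(testcase_id, status: str):
-    statusQueue.put({"testcase": testcase_id, "status": status})
-
-
-def poll_status_updates():
-    updates = []
-    while not statusQueue.empty():
-        updates.append(statusQueue.get_nowait())
-    return updates
-```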
-
-Most probably to display the logs inside UI the logger needs to be changed. It's defined in ``baangt.__init__.py``
-
-## Scope / DOD
-This task is considered completed after:
-* Implementation provided (in line with coding standards and PEP-8 conformity)
-* Functional test executed and passed
-* Existing functionality not compromised - baangt background mode works as before this change
-* Enhance existing documentation in docs-Folder in RST-Format
-* Unit-Tests in tests-folder providing reasonable coverage of the newly created functionality
-* git commit to feature branch and pull request on Gogs created

+ 0 - 0
TechSpecs/60 InProgress/__ignore.txt


+ 0 - 8
TechSpecs/70 Implementation done/CREATE_EXECUTABLE.md

@@ -1,8 +0,0 @@
-# Create executable using pyinstaller
-Apart from installing baangt via pip, cloning pip-repository and using the [docker container](https://gogs.earthsquad.global/athos/baangt-Docker)
-baangt should also be available as executable on Mac, Windows and Linux.
-
-# DoD:
-* Baangt executable was created and tested for either Mac or Windows (depending on your development system) as well as Linux (ubuntu)
-    * Versions: Ubuntu 64bit, Windows 10, Mac OS 10.13
-* Necessary adjustments to the code (if any) were done and tested (I only expect problems with paths, either reading or writing).

+ 0 - 64
TechSpecs/70 Implementation done/PDFComparisonMicroService.md

@@ -1,64 +0,0 @@
-# Overview
-
-In test cases we may have test steps, that download a PDF from a source. So far, we can only download the PDF, but not 
-do anything with it.
-
-# Aim
-The aim of this TechSpec is to provide further support for working with PDFs. In a Twitter poll, 40% said that they need
-PDF comparison for their automation tasks.
-
-# Functional description
-We want to be able to:
-* Save a reference PDF for a test step
-    * The test step versioning must still work (e.g. we must be able to create one reference PDF for each stage (Dev, Pre-Quality, Final-Quality, Migration, etc.)). This can be done by uploading different reference PDFs, receive different UUIDs and maintain multiple lines of test steps for each version/stage.
-* Compare a newly created/downloaded PDF to the reference PDF
-* Define default deviations (for instance Dates, Times, Documentnumbers), that should not count as difference
-* Receive status info on whether the document matches the reference document
-* If it doesn't match, get a visual representation of the differences
-* If it doesn't match, get a list of differences as delimited text
-
-## Out of scope:
-* No comparison of images between PDF-Files
-
-# Technical overview
-## UI
-There needs to be a Flask UI to add, change and delete reference PDFs for test steps. PDFs shall be stored as BLOB in 
-the database together with a UUID. The UUID is the reference parameter that the test step will use to call into the
-comparison service.
-
-The UI needs to provide a simple method to define exclusions of document contents from the comparison. The backend
-needs to provide a plugin/extension option for developers to develop more sophisticated comparison logic.
-
-## Backend
-The comparison functionality should be implemented as a separate microservice to be called by the test execution. Results of the 
-microservice will be added to testDataDict in the test run. If there's a difference between the documents, the two
-PDFs (reference and current PDF) shall be returned to the caller.
-
-Also creation and deletion of reference PDFs should be implemented using APIs, so that we're not depending only on the
-UI component, but can later implement further functionality.
-
-## baangt
-Service discovery and API-Call to microservice for PDF-comparison need to be implemented in baangt using a new command in TestStepMaster.py 
-"PDF-compare". Value1 will be the UUID of the reference-PDF within the micro-service. Value2 will be the downloaded PDF.
-If the result is not ``OK``, we shall add the text difference to a field in the output XLSX and embed the two updated documents (original, reference).
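-
-A hedged sketch of the command's call into the microservice (URL, route and response format are assumptions, as the API is not yet defined):
-
-```python
-import requests
-
-SERVICE_URL = "http://localhost:5000/compare"  # discovered service URL; an assumption
-
-
-def pdf_compare(reference_uuid: str, downloaded_pdf_path: str):
-    with open(downloaded_pdf_path, "rb") as f:
-        response = requests.post(f"{SERVICE_URL}/{reference_uuid}",
-                                 files={"pdf": f})
-    result = response.json()
-    # Assumed response keys: "status" ("OK"/"DIFF") and "differences" (text)
-    return result["status"], result.get("differences", "")
-```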
-
-### Deal with PDFs in Browser-Automation:
-In the step before "Compare" we'll need to download the PDF(s) or know, where the PDF was stored. 
-This needs to work on all platforms and drivers (FF, Chrome, Safari), also when using remote driver like in Selenium Grid V4 
-(integration currently under development) or Appium (branch "Appium" in GIT - should be finished by next week).
-
-# DoD:
-* Microservice created and pushed to new repository (not part of baangt repository) (sum: 9,5 hours)
-    * Creation of new items with upload of PDF, return of UUID, insert criteria to ignore differences (0..n RegEx Strings), 
-    description text (unlimited length). --> 6 hours
-    * Viewing of created items (show UUID and provide PDF for download) --> 0,5 hour
-    * Update of reference PDFs (replace existing PDF with a new upload) --> 1 hour
-    * Update of criteria to ignore differences (0..n RegEx Strings) and description text --> 2 hours
-    * Search for UUID and fulltext in description text --> 1 hour
-* baangt changes (deal with PDF-Downloads, new method "PDF-compare") committed to a new branch and tested (2 hours)
-    * Get familiar with the baangt codebase (in TestStepMaster.py mainly) (8 hours)
-    * test examples documented in /examples (1 hour)
-* Technical documentation up2date (classes have Docstrings, Methods where needed) (no additional effort)
-* Documentation of Microservice and updated documentation of baangt parameters in /docs-Folder (2 hours)
-* Unit-Tests with at least 80% code coverage written and successful (3 hours)
-* no critical linter errors or warnings (PEP-8 conformity of the code) (no additional effort)

+ 0 - 149
TechSpecs/70 Implementation done/PYSIMPLEGUI_UI_ADDITONS_1.md

@@ -1,149 +0,0 @@
-# To be updated based on this communication that happened on 17.3.2020 in the freelancer.com chat:
-
-ok, I would like to take this task.
-https://gogs.earthsquad.global/athos/baangt/src/master/TechSpecs/30%20Ready/PYSIMPLEGUI_UI_ADDITONS_1.md
-
-
-I was thinking to improve the Global Settings configuration from the first day.
-
-Below is my suggestion for the baangt framework as per standard guidelines:
-1> The UI should not force the user to remember all keywords. All choices and their meaning should be visible in the form of DropDowns, ComboBoxes, Checkboxes.
-
-
-
-2> Easy way to Save / Load /Export Settings.
-
-Please check the reference settings from the TWS gateway; I am very impressed with its UI and easy management.
-https://gogs.earthsquad.global/athos/baangt/src/master/TechSpecs/30%20Ready/PYSIMPLEGUI_UI_ADDITONS_1.md
-Here you can see that the user can see all possible settings and their explanations.
-So a noob user can easily understand how activating or deactivating each setting can affect the execution.
-I was thinking to implement global settings via a GUI menu box, which would load the globals.json file and activate or deactivate the necessary options.
-
-For settings which require True/False, we can use a checkbox.
-For variables which require selected options, like exportFilesBasePath, we will use a BROWSE directory function.
-
-These are examples for improvement.
-Please let me know your views.
-Yes, that's great indeed. The different configurations need to load and save the settings. The entry screen should still be as clean and simple as possible (the way it is now before you click "Details").
-Is this possible with PySimpleGUI or do we need to switch to Qt?
-We will stick to PySimpleGUI, as moving to Qt would force testers to install Qt for their OS along with PyQt.
-
-
-But we can't ignore the performance of Qt, since it is compiled and will be faster than Python.
-?
-Now I need to collect all possible settings we can make in the globals.json file, so that we can include them all in a single place.
-Where can I look for them?
-Easy. Just a sec.
-ok
-https://baangt.readthedocs.io/en/latest/ParametersConfigFile.html
-and after first chapter here: https://baangt.readthedocs.io/en/latest/Developer.html
-let me check
-ok, I got all settings.
-So here is my workflow:
---> Keep the existing methods to import globals.json settings as they are, without affecting their features.
---> When the user clicks on "Details", it will switch to the Settings menu.
---> This can be accessed via File > Properties > Preferences
-
-Currently "Properties" submenu is not working or not implemented.
---> All settings like "Release", "TC.slowExecution" will be implemented internally, and the user can see the necessary settings automatically activated/deactivated based on the globals.json file.
-
-(We will have to provide explanations from both the developer and the tester perspective, so the documentation will be updated.)
-
-
---> If the user makes any changes in the configuration, they will not be applied unless the "Apply" button is clicked. (We will add the buttons "Apply", "OK", "Cancel" to handle each case.)
-
---> On Apply, the global settings will be overridden with the new settings.
-
---> On Close or OK, the user will be switched to the main screen.
-How's that?
-Sounds good. I'm thinking to have a JSON-File with the available parameters, datatype and Mouseover-Texts rather than hardcoded in the screen. What do you think?
-Ok, so that JSON file can be updated in the future for new functionality. Mouseover texts will be used as hints to tell how each setting works.
-
-It will be like updating a JSON object (initially with all available settings) with another JSON file (here, globals.json).
-
-And the user will see all settings activated or deactivated in a single place, so that they can modify them based on the requirement.
-?
-I'd see the UI parameters JSON as something different from globals.json. Ultimately, of course, the settings that were chosen in the UI will be written to the Globals file(s). But the definition of which elements, which data types, which mouse-over texts will not be in globals.json.
-okay. I think we will implement settings like we have done for Address Create (singleton class), and the globals.json file will activate and deactivate the necessary settings.
-
-What I am thinking is this:
-1> A baangtsettings.json file which stores the details of each setting as a dictionary object.
-
-Suppose the Release variable. It will be stored like:
-
-{"variableName": "Release", "hint": "set particular version to test", "type": "Input Box", "value": "", "displayText": "Current Release"}
-
-Similar to how we process a form field.
-
-And in globals.json file,
-if there is Release=0.1.dev
-
-
-
-It will update above dictionary object as:
-
-{"variableName": "Release", "hint": "insert particular release here", "type": "Text", "value": "0.1.dev"}
-
-So, We will store other variables also.
-
-And in the UI it will be displayed based upon the input type, e.g. if "type" is "Bool", we will display the setting as a checkbox. Example for TC.Network:
-
-{"variableName": "TC.Network", "hint": "Enable / Disable network statistics", "type": "Bool", "value": "False", "displayText": "EnableNetwork"}
-
-So, each setting will have the keys ["variableName","hint","type","value","displayText"]
-
-variableName --> mapped to global settings file
-hint ---> Display text
-type ---> to display as checkbox, inputBox
-value ---> This is the field that will be updated for each Globals.json file
-displayText ---> This will be displayed on Frontend to User.
-
-
-This is my plan. Maybe you have a better one; I would like to know as well.
-Value should be able to have a json-element for dropdown-values. Other than that it's pretty much exactly what I thought about!
-ok, so for dropdown, it will be like <select> tag with <option>
-Yes, sounds good
-
-# Scope of this enhancement
-
-UI has currently just very basic functionality. Enhancements are needed in the following areas:
-
-## More comfortable handling of Global Parameters
-Right now in baangt.UI.UI, when you choose any global*.JSON, the parameters and values from this file are displayed. 
-4 empty lines are added to give the user a chance to add more parameters/values. In long configuration files, this
-leads to an overflowing window.
-
-### Solution approach
-Keep the global parameters/values in a sorted dict. Keep for instance 10 parameter/value-pairs on the UI (just as
-parameter-01, value-01, parameter-02, value-02 and so on). Add a vertical scroll-bar on the UI (if there are more 
-than 10 parameters (including the 4 empty ones)). 
-
-In the initialization loop over the first 10 entries of the globals-Dict and fill in parameter-01, value-01 until 
-parameter-10 and value-10. 
-
-When the user scrolls on the vertical scroll-bar, e.g. to position 4: Read the globals-Dict from position 4. Fill in
-UI-Element parameter-01 with parameter 4 from globals-Dict, value-01 with value 4 from globals-Dict, and so on.
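-
-A hedged sketch of the windowing logic (all names are assumptions):
-
-```python
-VISIBLE_ROWS = 10
-
-
-def visible_parameters(globals_dict: dict, scroll_position: int):
-    # Return the 10-row window into the sorted globals dict that starts at
-    # the current scrollbar position.
-    items = sorted(globals_dict.items())
-    window = items[scroll_position:scroll_position + VISIBLE_ROWS]
-    # Pad so the UI always renders 10 parameter/value pairs
-    window += [("", "")] * (VISIBLE_ROWS - len(window))
-    return window
-```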
-
-## Global Parameters as dropdown with additional values:
-Right now the user must know the allowed parameter names in order to tune the globals-file. That's not ideal. It would
-be better if at least standard values are in a dropdown and only customer specific values must be known.
-
-### Solution approach:
-For empty entries (Currently 4 at the end of the list):
-Change the Parameter-fields in the UI to be Dropdowns. Have a method to set the default entries. Enable manual 
-addition of values by the user. 
-
-For filled entries:
-No dropdown. Instead show a "delete"-Button for each row. After pressing the delete-button remove the entry and reload
-the Window (hide/unhide on Mac doesn't really work. It destroys the layout. Reloading works well)

+ 0 - 0
TechSpecs/70 Implementation done/__ignore.txt


File diff suppressed because it is too large
+ 0 - 126
TechSpecs/99 Done/CONCEPT_DB_UI.md


+ 0 - 41
TechSpecs/99 Done/DATABASE_AND_UI.md

@@ -1,41 +0,0 @@
-# TechSpec TestRun-Database
-The database is an alternative to creating testrun settings for baangt in XLSX. 
-Working with the database also provides, unlike XLSX, the option to reuse elements (TestCaseSequence, TestCase, TestStepSequence) between testruns, while in XLSX-Format there is no referencing, only Copy+Paste (and resulting maintenance issues).
-
-## Create Database and UI for Testrun Definitions
-Database and UI should be implemented using FLASK and ORM. Database SQLite is enough for now.
-### Main entities
-* Testrun
-* TestCaseSequence (n:m to TestRun)
-* DataFiles (n:m) to TestCaseSequence 
-(For now baangt.py supports only 1 dataFile. Later this will be refactored to support multiple files. Also there will be an option to connect to a database and use Query as input)
-* TestCase (n:m) to TestCaseSequence
-* TestStepSequence (n:m) to TestCase
-* TestStepExecution (1:n) to TestStepSequence
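-
-A hedged SQLAlchemy sketch of the first n:m relation (table and column names are assumptions):
-
-```python
-from sqlalchemy import Column, ForeignKey, Integer, String, Table
-from sqlalchemy.orm import declarative_base, relationship
-
-Base = declarative_base()
-
-# n:m association between TestRun and TestCaseSequence
-testrun_sequence = Table(
-    "testrun_sequence", Base.metadata,
-    Column("testrun_id", ForeignKey("testrun.id"), primary_key=True),
-    Column("sequence_id", ForeignKey("testcasesequence.id"), primary_key=True),
-)
-
-
-class TestRun(Base):
-    __tablename__ = "testrun"
-    id = Column(Integer, primary_key=True)
-    name = Column(String, nullable=False)
-    testCaseSequences = relationship(
-        "TestCaseSequence", secondary=testrun_sequence, back_populates="testRuns")
-
-
-class TestCaseSequence(Base):
-    __tablename__ = "testcasesequence"
-    id = Column(Integer, primary_key=True)
-    name = Column(String, nullable=False)
-    testRuns = relationship(
-        "TestRun", secondary=testrun_sequence, back_populates="testCaseSequences")
-```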
-
-### Supporting entities
-When a new database is created all entries in supporting entities shall be created (by ORM - not any db-specific command)
-* GlobalTestStepExecution (identical to TestStepExecution table but for reusable TestSteps)
-* ClassNames (Value table for Classnames in TestCaseSequence, TestCase, TestStepSequence)
-* BrowserTypes (Value table for TestCase->BrowserType).
-  * Values: FF, Chrome, IE, Safari, Edge
-* TestCaseType (Value table for Testcase->TestCaseType)
-  * Values: Browser, API-Rest, API-SOAP, API-oDataV2, API-oDataV4
-* ActivityType (Value table for TestStepExecution->Activity)
-  * Values: lots - see Note in Tab `TestStepExecution` in column `Activity`
-* LocatorType (Value table for TestStepExecution->LocatorType)
-  * Values: xpath, css, id
-
-Supporting entities shall have language/locale depending descriptions, that will be used in the UI to display tooltips and/or explanations in Dropdown-Fields.
-  
-## Create the UI
-Hierarchical display of testruns and all their subsequent entities. Most probably something like a Tree would be good with +/- Buttons to add/remove elements. This UI-Element must be searchable and show filtered search result after a few characters are typed.
-
-### Special treatment of Global Variables
-GlobalVariables are stored in `baangt.base.GlobalConstants.py` - these variables shall be available at several places, additionally to manually entered values (see excel-sheet `DropsTestRunDefinition`)
-
-### Testdatafiles:
-Headers of testdatafiles must be read, so that the column names are available for selection in TestStepExecution-Steps for use in Column `Value` or `Value2` 
-
-### Execution
-There should be a "Run"-Button, which can be pressed whenever the user is inside a testrun (or any level below). When the button is clicked, all changes shall be saved to the database. `baangt.py` shall be called with the testrun-name of the currently active testrun in the UI. Further parameters need to be discussed.

+ 0 - 6
TechSpecs/99 Done/DOWNLOAD_BROWSER_DRIVERS.md

@@ -1,6 +0,0 @@
-# Result
-If the browser drivers (Chromedriver, Geckodriver) for the current operating system can't be found in 
-`Path(os.getcwd()).joinpath("browserDrivers")`, download the latest Chromedriver and Geckodriver and unpack them, so that the application can use them.
-
-# Implementation
-In class `baangt.base.BrowserHandling.BrowserDriver` in Method `createNewBrowser` call a new method to identify, whether the browserDrivers are in the expected location and if not, download them accordingly.
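-
-A hedged sketch of that check (the download helper is hypothetical):
-
-```python
-import os
-from pathlib import Path
-
-
-def ensure_browser_drivers():
-    driver_path = Path(os.getcwd()).joinpath("browserDrivers")
-    driver_path.mkdir(exist_ok=True)
-    for driver in ("chromedriver", "geckodriver"):
-        if not list(driver_path.glob(driver + "*")):
-            # hypothetical helper that fetches and unpacks the latest release
-            download_and_unpack_latest(driver, driver_path)
-```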

+ 0 - 64
TechSpecs/99 Done/KatalonRecorderImporter.md

@@ -1,64 +0,0 @@
-# Situation
-
-In UI (baangtIA.py without any parameters opens the simple starter UI) there's a button with text "Import Katalon Recorder". 
-On pressing the button another pySimpleGui Window opens. In this window one can import (from clipboard) the exported result
-from Katalon Recorder (= Plugin in Chrome/FF to record browser interaction) and translate the contents to ```baangt``` format.
-
-On "Save as"-Button a new XLSX Testrun-Definition in Simple format is created. All Teststeps from the recording are included
-in the XLSX.
-
-This works quite well. But the UI is not pretty, and the functionality is incomplete, or at least hard to use: 
-
-In order to use the resulting
-XLSX, the users have to perform tedious manual steps (create variable names (=columns) in tab "data", move the entered data from the
-TestSteps into the respective columns of tab "data", and finally replace the cells in column "Value" of the TestSteps with the variable
-names they just created (=the columns in tab "data")).
-
-# Aim
-
-When a recording is imported, all Values should be extracted as columns in the data-tab. For instance, if you have a recording
-
-```markdown
-gotoUrl | http://franzi.com 
-Click   | //@ID='Button1'
-SetText | //@ID='TextInput2' | testTextTestText
-```
-
-right now we translate this into correct simpleFormat, but we'll copy the value "testTextTestText" into the field "value"
-of the Teststep. This is not practical. Users will have entered this text just as an example and want to use a variable to
-dynamically replace this fixed text.
-
-We shall extract all those variables, store them as columns in the Tab ```data```, set the field "Value" of the TestStep
-to the variable name (e.g. ```$(TextValue1)```) and store the value from the recording in the proper field of the tab data in Line 1.
-
-We shall do the same for Clicks. Create column "Button<n>" in tab data, set the testStepActivity to "ClickIF" and place 
-column-name in the Value-field of the TestStep (e.g. ``$(Button001)``)
-
-From the above example the tab data would look as follows:
-
-```
-Url               | Button001 | TextValue1       |
-http://franzi.com |   X       | testTextTestText |
-```
-
-The Tab ```TestStepExecution``` would look as follows:
-
-```
-Activity  | LocatorType | Locator             | Value 
-GOTOURL   |             |                     | $(Url) <-- Column in tab 'data'
-CLICKIF   | XPATH       | //*[@id=Button1]    | $(Button001)   <-- Column in tab 'data'
-SETTEXT   | XPATH       | //*[@id=TextInput2] | $(TextValue1) <-- Column in tab 'data'
-```
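-
-A hedged sketch of the extraction (function name and step-tuple shape are assumptions; the Button<n>/TextValue<n> naming follows the examples above):
-
-```python
-def extract_variables(recorded_steps):
-    data_row, test_steps = {}, []
-    counters = {"CLICK": 0, "SETTEXT": 0}
-    for activity, locator, value in recorded_steps:
-        if activity == "GOTOURL":
-            data_row["Url"] = locator
-            test_steps.append(("GOTOURL", "", "", "$(Url)"))
-        elif activity == "CLICK":
-            counters["CLICK"] += 1
-            column = f"Button{counters['CLICK']:03d}"
-            data_row[column] = "X"
-            test_steps.append(("CLICKIF", "XPATH", locator, f"$({column})"))
-        elif activity == "SETTEXT":
-            counters["SETTEXT"] += 1
-            column = f"TextValue{counters['SETTEXT']}"
-            data_row[column] = value
-            test_steps.append(("SETTEXT", "XPATH", locator, f"$({column})"))
-    return data_row, test_steps
-```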
-
-# UI
-
-If you have any suggestions, how to improve the UI of the Katalon Recorder Import dialogue, please get in contact.
-
-# Output
-
-The resulting XLSX is not prettified, e.g. column widths are standard and not formatted, the header lines are not in bold, etc.
-Please see methods in ``baangt.base.ExportResults.ExcelSheetHelperFunctions`` and apply here too.
-
-# Test
-
-Create unit tests for new functionality

+ 0 - 16
TechSpecs/99 Done/LogNetworkTraffic.md

@@ -1,16 +0,0 @@
-# Aim
-So far, the durations of web pages are logged via the class ``Timing``. In some performance testing and analysis jobs this
-is not enough. Additionally, we need the network traffic stored for each request.
-
-# Vision
-Activated via a TestRun parameter or in Globalsettings, the network traffic needs to be stored. In `ExportResults` the 
-network traffic should be stored in a separate tab in the output XLSX (Status, Method, URI, Type, Size, Headers, Params, Response)
-for each activity done in the browser.
-
-# Implementation idea:
-Usage of https://github.com/browserup/browserup-proxy and the corresponding Python package. Dynamically create a proxy 
-for each active browser and pass the new proxy URL to the browser in ``baangt.base.BrowserHandling``. 
-Read the logs of browserup-proxy after finished requests and store the result together with the
-``Teststep``. Other ideas welcome; a sketch follows below.
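-
-A hedged sketch using the ``browsermobproxy`` Python client, assuming browserup-proxy keeps its API-compatible REST interface:
-
-```python
-from browsermobproxy import Server
-from selenium import webdriver
-
-server = Server("/path/to/browserup-proxy/bin/browserup-proxy")  # install path assumed
-server.start()
-proxy = server.create_proxy()
-
-options = webdriver.ChromeOptions()
-options.add_argument(f"--proxy-server={proxy.proxy}")
-driver = webdriver.Chrome(options=options)
-
-proxy.new_har("teststep")                # start recording for this Teststep
-driver.get("https://example.com")
-entries = proxy.har["log"]["entries"]    # Status, Method, URI, sizes, headers...
-
-driver.quit()
-server.stop()
-```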
-
- 

+ 0 - 32
TechSpecs/99 Done/MULTIPROCESSING_REFACTOR.md

@@ -1,32 +0,0 @@
-# Multiprocessing
-In baangt.TestCaseSequenceMaster.py in method execute_parallel the class TestCaseSequenceParallel is called and executed.
-
-This works as designed on Mac and Ubuntu, but doesn't work on Windows.
-
-Also on Mac and Ubuntu the performance using this technique is not ideal, as all executions wait for the last task to 
-finish before they start a new iteration.
-
-# Goals
-Change from process-based multiprocessing to thread-based parallel processing (using the standard ``threading`` module instead of 
-``multiprocessing``).
-
-Executions run truly in parallel; for instance, if we have 4 parallel executions, each of them starts the next
-test case as soon as it has finished the previous one (see the sketch below).
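-
-A hedged sketch of the intended worker-pool pattern (names are assumptions): each of the N workers pulls the next test case as soon as it finishes its previous one.
-
-```python
-import threading
-from queue import Empty, Queue
-
-
-def run_parallel(testcases, parallel_runs: int):
-    work = Queue()
-    for testcase in testcases:
-        work.put(testcase)
-
-    def worker():
-        while True:
-            try:
-                testcase = work.get_nowait()
-            except Empty:
-                return                      # queue drained: worker terminates
-            execute_testcase(testcase)      # placeholder for the real execution
-            work.task_done()
-
-    threads = [threading.Thread(target=worker) for _ in range(parallel_runs)]
-    for thread in threads:
-        thread.start()
-    for thread in threads:
-        thread.join()
-```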
-
-# Current implementation
-Current implementation is in baangt.TestCaseSequence.TestCaseSequenceMaster.py and from there calls 
-baangt.TestCaseSequenceParallel.py when parameter ``ParallelRuns`` has a value greater than 1.
-
-# Further documentation:
-baangt documentation is on https://baangt.readthedocs.io
-Repository is on https://gogs.earthsquad.global/athos/baangt
-
-# DoD (including effort indication):
-* Usage of `threading` and `Queue` instead of `multiprocessing` library to start parallel browsers and execute testcases
-in parallel. (1h)
-    * After execution of Testcase latest data from Testcase (`TestdataDict`) is updated in `TestRun.py` (=same behaviour as 
-        in the current implementation) (no effort)
-* Functionality was tested locally on either Linux and/or Windows and/or Mac and results documented (e.g. Log-File 
-showing successful execution) (2h)
-* Unit test cases in /tests for all touched methods created and successful (3h)
-* Pull request was created (no effort)

+ 0 - 15
TechSpecs/99 Done/Plugin-Readyness.md

@@ -1,15 +0,0 @@
-# Current situation
-baangt classes are well structured and generally follow the principle of separation of concerns. Users can easily subclass
-existing baangt classes. Depending on the requirements, the user might end up subclassing a lot of classes and overwriting
-a lot of methods.
-
-Every overridden method that doesn't call super().<method>() risks breaking with upcoming changes.
-
-Sometimes it would be easier to implement a Plugin than subclassing.
-
-# Aim of this task
-Prepare baangt classes and methods for usage of [pluggy](https://pluggy.readthedocs.io/en/latest/). Implement pluggy-entry points in 
-* baangt.base.BrowserHandling
-* baangt.base.TestRun
-* baangt.base.Timing
-* baangt.base.ExportResults
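-
-A hedged sketch of the pluggy wiring (the hook name is an assumption, not baangt's API):
-
-```python
-import pluggy
-
-hookspec = pluggy.HookspecMarker("baangt")
-hookimpl = pluggy.HookimplMarker("baangt")
-
-
-class BaangtHookSpec:
-    @hookspec
-    def before_testcase(self, testDataDict):
-        """Called before each test case is executed."""
-
-
-class MyPlugin:
-    @hookimpl
-    def before_testcase(self, testDataDict):
-        testDataDict["injectedByPlugin"] = True
-
-
-pm = pluggy.PluginManager("baangt")
-pm.add_hookspecs(BaangtHookSpec)
-pm.register(MyPlugin())
-pm.hook.before_testcase(testDataDict={})
-```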

+ 0 - 13
TechSpecs/99 Done/READ_THE_DOCS.md

@@ -1,13 +0,0 @@
-# Integration with ReadTheDocs.io
-The aim of this task is to provide automatically generated documentation from source code using ``Sphinx`` so that we
-can automatically update [ReadTheDocs](http://readthedocs.io) and docs-pages on [Gogs](https://gogs.earthsquad.global) and later on Github.
-
-Implementation should follow roughly the steps from [this tutorial](https://daler.github.io/sphinxdoc-test/includeme.html).
-
-# DoD:
-* Sphinx was configured properly in ``conf.py`` so that generation of extracted documentation from the source code happens
-* `index.rst` and *.rst-Files for all relevant paths were built properly (using `sphinx-apidoc ../ --output-dir docs`)
-* Necessary GIT-Branch for `gh-pages` was created and `Makefile` updated accordingly.
-* Contents of ``README.md`` were transformed into `Readme.rst`, properly formatted and all necessary steps taken to ensure
-readability on github pages. 
-* Pull request created and result of local test attached to pull request

+ 0 - 54
TechSpecs/99 Done/ResultDirectoriesRelocate.md

@@ -1,54 +0,0 @@
-# Situation
-
-So far we're using os.getcwd() to get the current working directory. Everything works fine on direct download of the
-repository or unzipped Zip-Files from https://github.com/athos1972/baangt-executables.
-
-Even the Windows installer works fine, but when the installed EXE is started, it needs admin rights, because we're 
-accessing the C:\Program Files folder. 
-
-# Aim
-
-If installed on Windows, behave like a normal Windows application. Save data in the \User\baangt folder and use C:\Program Files 
-only for the sources.
-
-# Implementation
-
-Unfortunately there are many places in the code, where we access directories. Some of them already have parameters, 
-which are derived during runtime (but so far wrongly derived).
-
-Some logs (Browsermob-Proxy) need updates in configuration files
-
-For Firefox, Chromedriver and Edge/Chromium the log directory needs to be set in the code (currently not done, which
-makes them also log into os.getcwd()).
-
-After this TechSpec is implemented, all access to the file system should be determined according to the OS
-and installer option (pyInstaller (executable) vs. python script execution ``python3 baangtIA.py``).
-
-## Separate class
-
-It would be a good thing to have all writing file system accesses in one class and have a method for each
-OS. Inside the class we could determine, on which OS we are and whether or not we're in pyInstaller-mode
-or as executable in Windows.
-
-Methods could be: 
-* getScreenshotPath
-* getLogPath
-* getOutputDirectoryPath
-* getDatabasePath
-* getIniPath
-etc.
-
-This class must also respect paths, that are given by the user (like e.g. GC.PATH_SCREENSHOTS, 
-GC.PATH_EXPORT) and only make assumptions when these paths are not defined by the user.
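-
-A hedged sketch of such a class (folder names and the override handling are assumptions):
-
-```python
-import os
-import sys
-from pathlib import Path
-
-
-class ManagedPaths:
-
-    def __init__(self, user_overrides: dict = None):
-        # e.g. {"screenshots": GC.PATH_SCREENSHOTS, "export": GC.PATH_EXPORT}
-        self.overrides = user_overrides or {}
-
-    def _base_dir(self) -> Path:
-        if getattr(sys, "frozen", False):   # running from a pyInstaller build
-            return Path.home() / "baangt"
-        return Path(os.getcwd())            # plain `python3 baangtIA.py`
-
-    def _dir(self, key: str, default_name: str) -> Path:
-        path = Path(self.overrides.get(key) or self._base_dir() / default_name)
-        path.mkdir(parents=True, exist_ok=True)
-        return path
-
-    def getScreenshotPath(self) -> Path:
-        return self._dir("screenshots", "screenshots")
-
-    def getLogPath(self) -> Path:
-        return self._dir("logs", "logs")
-```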
-
-## How to find places to replace code
-os.getcwd() and pathlib are used throughout the code base to determine paths. Start there and look at what happens with
-the results. If write access happens, encapsulate it in the above-mentioned class.
-
-# DoD
-* Script for creation of Windows Executable updated to create folders also in Users Home-Directory
-* Script for creation of Windows Executable updated to change log folder for browsermob-proxy
-* All file system access on Mac, Ubuntu and Windows work like now when installed from GIT-Repository or unzipped ZIP-File
-* All file system access on Windows work in Users-Directory if installed via the installer.
-
- 

+ 0 - 18
TechSpecs/99 Done/TECHSPEC_RUNLOG.md

@@ -1,18 +0,0 @@
-# Testrun Log and Reporting
-For easy comparison between stages, software versions and testcases we want to have a database of testruns
-
-## Phase 1
-### Create Database and entities
-create (if not exists) a database and entities to save:
-* *Testrun-Name (from `baangt.base.TestRun.testRunName`)
-* *Logfile-Name (from `__init__.py->logFilename`)
-* *Start-Timestamp/End-Timestamp (Call to `Timing.takeTime` when TestRun starts and stops)
-* values from globals.json (if used)
-* *Datafile(s) used
-* *Count of Testcases in each status (OK, Failed, Paused --> Values from `baangt.base.GlobalConstants.py` )
-
-*This logic is already implemented in ExportResults.py and can be reused for the planned functionality.
-
-### Extend TestRun.py to write into the database
-* In the method `tearDown` of `baangt.base.TestRun.py` add a call to store the testrun execution data into the database.
-* Alternatively use the existing (and already called) ExportResults.py class directly

+ 0 - 10
TechSpecs/99 Done/UnitTests_Part1.md

@@ -1,10 +0,0 @@
-# Aim
-So far, no unit tests were implemented in ``baangt``. The aim of this task is to provide unit tests for the following 
-classes/methods using pytest:
-
-* baangt.base.TestRun getSuccessAndError
-* baangt.base.TestRun setResult
-* baangt.base.IBAN getRandomIBAN
-* baangt.base.BrowserHandling slowExecutionToggle
-* baangt.base.BrowserHandling takeScreenshot
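-
-A hedged pytest sketch for one of the listed methods (import path, constructor and the exact assertions are assumptions):
-
-```python
-from baangt.base.IBAN import IBAN  # import path and class name are assumptions
-
-
-def test_getRandomIBAN():
-    # IBANs are alphanumeric and at least 15 characters in every country
-    iban = IBAN().getRandomIBAN()
-    assert isinstance(iban, str)
-    assert len(iban) >= 15
-```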
-

+ 0 - 39
TechSpecs/99 Done/Versioning.md

@@ -1,39 +0,0 @@
-# Aim
-
-In ``baangt`` simple XLSX and in complete XLSX-Formats there's a field ``Release`` for each TestStep. Right now this field
-is not interpreted in the program.
-
-The intended use of the field is described in the Docs in "SimpleExample" toward the end. Basically we should have an option
-to conditionally (per stage) activate/deactivate testSteps (Example: In DEV-Stage you need to fill a field, which doesn't
-(yet) exist in Quality-Stage. Instead of copying the test case and adding one statement in the Dev-Copy you'd add one TestStep
-in the Testcase and put ">= " + the proper version of the Dev-Stage in the field "Release").
-
-# Prerequisites
-
-PyPI already supports several different approaches to software version numbering. We should implement the same logic in 
-the ``baangt`` standard. If the user needs something else, they'll need to subclass the method. It's a prerequisite for this task
-to understand the existing version logic used by PyPI.
-
-# Field characteristics
-* The field may be empty. In this case the TestStep will run no matter which version was set in Globals
-* The field may have a condition in the first two characters ("<", ">", "=", "<>", "<=", ">=") followed by blank and 
-the version number (e.g. ``>= 2019.05b``)
-
-# Implementation
-
-* First we need to extend BrowserDriver-Methods (``findBy`` and all callers) with a new optional variable ``release``. 
-* A new parameter ``Release`` needs to be added to UI.py with a default value of None and stored in all globals.json. (Same logic
-as was already added for "TC." + GC.DATABASELINES). 
-    * If this parameter is set in Globals and a TestStep's line has the
-field ``release`` filled, we shall use the PyPi-Logic to see, if the current version qualifies to run this line (this method
-should be encapsulated, so that it can be easily subclassed by users).
-    * If the line doesn't qualify for execution, we shall document in the logs (Level=Debug) that we skipped this line because 
-{version_line} disqualifies according to {version_globals}, and return to the caller.
-* Additionally check, if the field is included in flask implementation. If not, add it to the model and prepare migration.
-* Also make sure the field "Release" is correctly mapped in TestStepMaster.py, so that the value reaches the ``findBy`` methods.
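-
-A hedged sketch of the encapsulated release check using the ``packaging`` library's PEP-440 version comparison (operator parsing follows the field characteristics above):
-
-```python
-import operator
-
-from packaging.version import Version
-
-OPERATORS = {"<": operator.lt, ">": operator.gt, "=": operator.eq,
-             "<>": operator.ne, "<=": operator.le, ">=": operator.ge}
-
-
-def qualifies_for_execution(release_field: str, release_globals: str) -> bool:
-    if not release_field:
-        return True                       # empty field: always run this TestStep
-    op_token, version = release_field.split(" ", 1)
-    return OPERATORS[op_token](Version(release_globals), Version(version))
-
-
-# qualifies_for_execution(">= 2019.05b", "2019.06") -> True
-```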
-
-# Test
-
-* Write unit tests to show the behaviour of the version comparison between globals and TestSteps.
-* Write unit tests to show the behaviour if no value was set in globals, but a value in the TestStep.
-* Write unit tests to show the behaviour if a value is set in globals, but no value in the TestStep.

+ 0 - 62
TechSpecs/99 Done/XLSX2BaangtDBAndViceVersa.md

@@ -1,62 +0,0 @@
-# Current Situation
-
-We currently have 3 loosely connected ways to define test runs (and TestCaseSequences, TestCases, TestStepSequences 
-and TestSteps) in baangt:
-
-* SimpleFormat (XLSX): Has only 2 tabs ("TestStepExecution" and "data"). Very fast, very simple. No options for further
-configuration except the options that come from "overriding" in globals. When interpreted by baangt interactive starter
-or baangt CLI we actually create a complex XLSX with default values.
-* Complex XLSX: Has a lot of tabs and all options, that we have in baangtDB. Multiple TestCases, Browsers, data files, 
-and so on.
-* BaangtDB: The flask app, that enables the users to have real modular test sequences and reuse sequences across all 
-structural elements (e.g. Login-Page, commonly used "cards" in multiple test cases, etc.).
-
-Directly connected to these definitions are the two following options to create customer/installation specific 
-functionality:
-* Subclassing: Some or all of the structural items of baangt standard are being subclassed and used instead of the standard
-classes (only possible in Complex XLSX and in BaangtDB - where the executing class is a parameter).
-* Plugins: Methods of the standard baangt can be redefined by using and activating plugins. These changes are also 
-effective when using SimpleFormat (e.g. one could override the data source of SimpleFormat-XLSX to be something else)
-
-# Problem
-
-There is no support for switching between the options. Users need to be able to export and import when baangtDB is the
-center of the installation.
-
-## Processes
-
-### Update of existing test cases:
-Assume baangtDB is run by the central testmanagement department of the organization. baangt simple format is used by the business
-departments all over the world. A new release of their software is about to come out and they need test cases to be 
-adjusted to the local/regional specialities in pre-production stage. Central testmanagement department doesn't know, which specifics these are, 
-but they know, which test cases aren't working on pre-production stage. Central testmanagement department wants to
-generate SimpleFormat XLSX or Complex XLSX from a baangtDB-Testcase and send this XLSX to business departments in order to fix the 
-definitions. (= Export from baangtDB to Complex XLSX).
-
-### Adding new test cases:
-Business department tests a new functionality and is happy with it. They want the testcase to be included to regression
-test set. They send the SimpleFormat XLSX or Complex XLSX to central testmanagement department, who need to add those
-test cases to certain test runs, defined in baangtDB. (= Import Complex XLSX to baangtDB).
-
-# Implementation
-
-## Flask import
-* In Flask app create an import button on level Testrun. 
-* When button is pressed upload XLSX, run through converter and save resulting objects in 
-testrun-Database. 
-* Show log of import (especially the object names, that were created)
-
-## Flask export
-* In Flask app create an export button on level Testrun.
-* When button is pressed, create complex XLSX-Format. Filename = <Testrun-name><timestamp>.xlsx
-* Download resulting XLSX to Frontend
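-
-A hedged Flask sketch of the export endpoint (route and converter call are assumptions; the filename pattern follows the spec):
-
-```python
-import datetime
-
-from flask import Flask, send_file
-
-app = Flask(__name__)
-
-
-@app.route("/testrun/<testrun_name>/export")
-def export_testrun(testrun_name):
-    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
-    filename = f"{testrun_name}{timestamp}.xlsx"
-    write_complex_xlsx(testrun_name, filename)   # hypothetical converter call
-    return send_file(filename, as_attachment=True)
-```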
-
-# Scope / DOD
-This task is considered completed after:
-* Implementation provided (in line with coding standards and PEP-8 conformity)
-* Functional test executed and passed
-* Existing functionality not compromised - baangt Interactive Starter, baangt CLI and baangtDB work as before this change
-* Enhance the existing documentation in the docs folder in RST format
-    * For this task there will be a new page in the docs (Import/Export in baangtDB)
-* Unit tests in the tests folder providing reasonable coverage of the newly created functionality
-* git commit to feature branch and pull request on Gogs created

+ 0 - 62
TechSpecs/99 Done/addressCreation.md

@@ -1,62 +0,0 @@
-# Creation of address data
-
-## Prerequisites
-
-In many cases test master data (in this case addresses) needs to be created before any transactional data (e.g. sales orders,
-user accounts, shipping information) can be processed.
-
-Depending on the industry and use case, the generation of this data might be of critical importance for the subsequent processes,
-e.g. to test shipping fees for an international forwarding company you'll likely need test data in different countries.
-
-When testing e.g. property insurance, you'll need even more distinct data to test business logic that derives premium
-discounts and surcharges based on risk attributes of a certain address.
-
-The more complex the data requirement is, the more likely testers are to prepare "their" own synthetic test data set (e.g.
-1 address per parameter to test) and re-use this data over and over again for manual and automated tests. Very often this
-leads to undetected errors (this one combination obviously works, but a slight deviation with real-world data brings
-errors that linger undetected in the productive system) or false errors (e.g. a customer who has 10,000 insurance contracts
-in the test system, which in turn leads to wrongfully reported performance bottlenecks that will never happen in
-production). This happens over and over again in many organizations, and time and money spent on wrong analysis hurts double:
-once because it's money spent on activities that don't deliver value, and second because time spent on those activities means
-less time for important improvements.
-
-## Aim
-
-``baangt`` should provide an easy and easily extendable way to generate address data for a test case. 
-
-## Implementation
-
-There needs to be a new activity "ADDRESS_CREATE" in SimpleFormat (``TestStepMaster.py``). Field ``value`` = optional list of
-attributes (see below), ``value2`` = prefix for fieldnames (optional).
-
-When this activity is called, we shall call the singleton class ``AddressCreate`` using the (optional) parameters from the
-value field. This class is **not** in scope of the current specification. The method ``returnAddress`` provides a dict of
-fields and values, which shall be stored in ``testDataDict`` (with the (optional) prefix taken from ``value2``).
-
-After the fields are filled, they'll be used by the test case to fill values for the UI or API.
-
-### Fields
-The following fields are part of the return value of method ``returnAddress``:
-* CountryCode**
-* PostlCode**
-* CityName
-* StreetName
-* HouseNumber
-* AdditionalData1
-* AdditionalData2
-
-**These fields can be used as filter criteria in field ``value``. JSON format should be supported. Example of field ``value``: ``{CountryCode: CY, PostlCode: 7*}``.
-Values must be mapped into a ``dict``.
-
-If a prefix was provided in field ``value2``, the fieldnames shall be concatenated with this prefix, e.g. if
-prefix = ``PremiumPayer_``, then the resulting field for ``CountryCode`` in ``testDataDict`` would become
-``PremiumPayer_CountryCode``.
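-
-A minimal sketch of how this activity could be handled in ``TestStepMaster``, assuming ``AddressCreate`` exposes
-``returnAddress`` as described above and that field ``value`` contains strict JSON; the method name
-``execute_address_create`` and the keyword-argument interface of ``returnAddress`` are assumptions:
-
-```python
-import json
-
-def execute_address_create(self, value="", value2=""):
-    # Parse the optional filter criteria from field ``value``,
-    # e.g. '{"CountryCode": "CY", "PostlCode": "7*"}' -> dict
-    filters = json.loads(value) if value else {}
-    # Singleton class as described above (not part of this specification)
-    address = AddressCreate().returnAddress(**filters)
-    # Store each returned field in testDataDict, concatenating the
-    # optional prefix from field ``value2`` with the fieldname
-    for field_name, field_value in address.items():
-        self.testDataDict[f"{value2}{field_name}"] = field_value
-```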
-
-## Scope / DOD
-* Implementation provided (in line with coding standards and PEP-8 conformity)
-* Functional test executed and passed (in this case this also means creating a new SimpleFormat XLSX for a site where
-address data can be entered)
-* Enhance the existing documentation in the docs folder in RST format
-    * in this case in simpleExample.rst and SimpleAPI.rst
-* Unit tests in the tests folder providing reasonable coverage of the provided functionality
-* git commit to feature branch and pull request on Gogs created

+ 0 - 26
TechSpecs/99 Done/assertions.md

@@ -1,26 +0,0 @@
-# Situation
-Currently ``baangt`` supports click, settext, iframes, window handling and so on for the web. There's also a way to access
-values from elements and write them back to ``testDataDict``, but there are no classical assertions implemented.
-
-# Aim
-Assertions should be available in all test technologies (currently API and browser). Assertions should follow the
-existing logic by using method ```findBy``` (similar to the current implementation in ``findByAndWaitForValue``). This method
-retrieves the value of an element but doesn't compare it to a reference value. It could be a good base for the assert
-method, though (just call ``findByAndWaitForValue`` and compare the result to the given reference parameter from ``testDataDict``).
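-
-A minimal sketch of such an assert method on the browser handling class; ``findByAndWaitForValue`` is the existing
-method mentioned above, while the method name ``assertEqual``, the logger and the return convention are assumptions:
-
-```python
-def assertEqual(self, expectedValue, **findByArgs):
-    # Retrieve the current value of the element via the existing method
-    actualValue = self.findByAndWaitForValue(**findByArgs)
-    if actualValue != expectedValue:
-        # Log the mismatch; how the testcase status is set is up to the implementation
-        logger.error(f"Assertion failed: expected '{expectedValue}', got '{actualValue}'")
-        return False
-    return True
-```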
-
-The simplest type of assertion reads the attribute or text of an element and compares it to a variable from ``testDataDict``.
-These assertions should also be available in the Excel Simple Format (implementation in ``TestStepMaster``, method ``executeDirect``).
-
-A more complex situation comes up when the assertion is checked against an API. The sequence would be
-(see the sketch after this list):
-* Execute SetText or Click or any sequence of activities
-* Execute an API call, receive the result
-* Compare a specific part of the result with a field value from ``testDataDict``
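-
-A hedged sketch of the last two steps of that sequence, assuming a plain ``requests`` call and a dotted-path helper to
-pick the relevant part of the JSON response; none of these names are existing baangt API:
-
-```python
-import requests
-
-def assert_api_field(url, json_path, testDataDict, reference_field):
-    # Execute the API call and receive the result
-    response = requests.get(url, timeout=30)
-    response.raise_for_status()
-    # Walk the dotted path, e.g. "order.status", into the JSON result
-    value = response.json()
-    for key in json_path.split("."):
-        value = value[key]
-    # Compare the specific part of the result with the field value from testDataDict
-    return value == testDataDict[reference_field]
-```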
-
-The most complex functionality is needed for asynchronous assertions, either via callback or from batch processing. This
-needs to be implemented by customer-specific routines. The main point here is to set the testcase to status
-```GC.TESTCASESTATUS_PENDING``` and to have a unique ID of the testrun and the TestCase-ID that can be matched/found by the
-asynchronous trigger, which would later set the testcase status accordingly (failed, OK).
-
-# Implementation
-For now we'll only implement the simplest type of assertions for browsers and later move on to the more complex implementations.
-

BIN
TechSpecs/ScreenshotWindowsAfterLatestCommit.png


+ 0 - 9
TechSpecs/_ProcessWithContributers.md

@@ -1,9 +0,0 @@
-# Process to deal with TechSpecs:
-
-* New ideas --> Folder "01 in Creation"
-* Functional part specified --> Folder "30 Ready"
-* Technical concept --> Folder "40 Ready for Implementation"
-* Currently implementing --> "60 InProgress"
-* Pull Request created --> "70 Impl Done"
-* Merged to Candidate or Release branch --> "80 Merged"
-* Tested and deployed --> "99 Done"

+ 0 - 85
analyzer_example.py

@@ -1,85 +0,0 @@
-import uuid
-from sqlalchemy import create_engine
-from sqlalchemy.orm import sessionmaker
-from baangt.base.DataBaseORM import DATABASE_URL, TestrunLog, GlobalAttribute, TestCaseSequenceLog
-from baangt.base.DataBaseORM import TestCaseLog, TestCaseField, TestCaseNetworkInfo
-
-# db interface object
-class dbTestrun:
-	def __init__(self, db_url):
-		engine = create_engine(db_url)
-		self.session = sessionmaker(bind=engine)()
-
-	def printTestrunList(self):
-		#
-		# prints the list of Testruns in DB
-		#
-		print(f'{"UUID":40}Name')
-		for log in self.session.query(TestrunLog).all():
-			print(f'{str(uuid.UUID(bytes=log.uuid)):40}{log.testrunName}')
-
-	def printTestrunSummary(self, testrun_uuid):
-		#
-		# prints summary of the specified Testrun
-		#
-		tr_log = self.session.query(TestrunLog).get(testrun_uuid)
-		print(f'\n{"*"*10} Summary for Testrun {uuid.UUID(bytes=testrun_uuid)} {"*"*10}')
-		print(f'\nUUID\t{uuid.UUID(bytes=testrun_uuid)}')
-		print(f'Name\t{tr_log.testrunName}\n')
-		print(f'Testrecords\t{len(tr_log.testcase_sequences[0].testcases)}')
-		print(f'Successful\t{tr_log.statusOk}')
-		print(f'Paused\t\t{tr_log.statusPaused}')
-		print(f'Error\t\t{tr_log.statusFailed}')
-		print(f'\nLogfile:\t{tr_log.logfileName}\n')
-		print(f'Start time\t{tr_log.startTime.strftime("%H:%M:%S")}')
-		print(f'End time\t{tr_log.endTime.strftime("%H:%M:%S")}')
-		print(f'Duration\t{tr_log.endTime - tr_log.startTime}')
-
-	def printTestrunGlobals(self, testrun_uuid):
-		#
-		# prints global settings for specified testrun
-		#
-		print(f'\n{"*"*10} Global Settings for Testrun {uuid.UUID(bytes=testrun_uuid)} {"*"*10}\n')
-		for log in self.session.query(GlobalAttribute).filter(GlobalAttribute.testrun_uuid == testrun_uuid):
-			print(f'{log.name:25}{log.value}')
-
-	def printTestrunTestcases(self, testrun_uuid):
-		#
-		# prints Testcase info of the specified Testrun
-		#
-		tc_counter = 0
-		print(f'\n{"*"*10} Testcases of Testrun {uuid.UUID(bytes=testrun_uuid)} {"*"*10}')
-		for tc in self.session.query(TestCaseLog).filter(TestCaseLog.testcase_sequence.has(TestCaseSequenceLog.testrun_uuid == testrun_uuid)):
-			tc_counter += 1
-			print(f'\n>>> Testcase #{tc_counter}')
-			print(f'\nUUID\t{uuid.UUID(bytes=tc.uuid)}')
-			for log in self.session.query(TestCaseField).filter(TestCaseField.testcase_uuid == tc.uuid):
-				print(f'{log.name:25}{log.value}')
-
-# execution code
-if __name__ == '__main__':
-	
-	# target Testrun
-	testrunName = 'example_googleImages.xlsx_'
-
-	# interface object
-	db = dbTestrun(DATABASE_URL)
-
-	# get target testrun
-	item = db.session.query(TestrunLog).filter(TestrunLog.testrunName == testrunName).first()
-	if item is None:
-		print(f'ERROR. Testrun \'{testrunName}\' does not exist in DB')
-		print('\nAvailable Testruns in DB:')
-		db.printTestrunList()
-		exit()
-	
-	# print Testrun Report
-	db.printTestrunSummary(item.uuid)
-	db.printTestrunGlobals(item.uuid)
-	db.printTestrunTestcases(item.uuid)

+ 1 - 1
baangt/base/PathManagement.py

@@ -46,7 +46,7 @@ class ManagedPaths(metaclass=Singleton):
         """
         Will return path where Log files will be saved.
 
-        This Path will be taken from Paths.json
+        This Path will be taken from old_Paths.json
 
         :return: Logfile path
         """

example_googleImages.json → examples/example_googleImages.json


+ 0 - 29
execMac.sh

@@ -1,29 +0,0 @@
-#!/bin/sh
-
-rm -r dist
-rm -r build
-
-pyinstaller --clean --onedir \
-	--distpath exec_mac/ \
-	--workpath exec_mac/build \
-	--specpath exec_mac \
-	--windowed \
-	--name baangt \
-	--add-data '../baangt/ressources/baangtLogo2020Small.png:ressources' \
-	--add-data '../examples/:examples/.' \
-	--add-data '../browsermob-proxy:browsermob-proxy/.' \
-	--noconfirm \
-	baangtIA.py
-
-# Remove Screenshots and Logs
-rm -r exec_mac/baangt/examples/Screenshots
-rm -r exec_mac/baangt/examples/Logs
-rm -r exec_mac/baangt/examples/1testoutput
-
-# Create ZIP-file
-mkdir executables
-rm executables/baangt_mac_executable.zip
-zip -r -X executables/baangt_mac_executable.zip exec_mac/baangt/
-
-# Remove Build-Folder
-rm -r exec_mac

+ 0 - 23
execUbuntu.sh

@@ -1,23 +0,0 @@
-#!/bin/sh
-
-pyinstaller --clean --onedir --noconfirm \
-	--distpath ubuntu/ \
-	--workpath ubuntu/build \
-	--specpath ubuntu \
-	--name baangt \
-	--add-data '../baangt/ressources/baangtLogo2020Small.png:ressources' \
-	--add-data '../examples/:examples/.' \
-	--add-data '../browsermob-proxy:browsermob-proxy/.' \
-	baangtIA.py
-
-# Remove Screenshots and Logs
-rm -r ubuntu/baangt/examples/Screenshots
-rm -r ubuntu/baangt/examples/Logs
-
-# Create ZIP-file
-mkdir executables
-rm executables/baangt_ubuntu_executable.tar.gz
-tar -zcvf executables/baangt_ubuntu_executable.tar.gz ubuntu/baangt/
-
-# Remove build folder
-rm -r ubuntu

+ 0 - 17
execWindows.bat

@@ -1,17 +0,0 @@
-rm -r dist
-rm -r build
-rm -r executables
-
-pyinstaller --noconfirm --path C:/Users/buhl/git/baangt/venv3_6/Lib/site-packages windows/baangtWindows.spec
-
-rem Remove Screenshots and Logs
-rm -r dist/baangt/examples/Screenshots
-rm -r dist/baangt/examples/Logs
-rm -r dist/baangt/examples/1testoutput
-rm -r dist/baangt/Logs
-
-
-rem Create ZIP-file
-mkdir executables
-rm executables/baangt_windows_executable.zip
-powershell Compress-Archive dist/baangt/. executables/baangt_windows_executable.zip

Paths.json → old_Paths.json


+ 0 - 40
requirements_dev.txt

@@ -1,40 +0,0 @@
-Appium-Python-Client==0.52
-alabaster==0.7.12
-Babel>=2.8.0
-bump2version>=1.0.0
-beautifulsoup4==4.8.2
-browsermob-proxy>=0.8.0
-coverage==5.0.4
-docutils==0.15.2
-dataclasses>=0.6
-dataclasses-json>=0.4.2
-faker>=4.0.2
-gevent>=1.5.0
-lxml==4.4.2
-MarkupSafe==1.1.1
-more-itertools==8.0.2
-openpyxl>=3.0.3
-packaging==20.0
-Pillow==7.0.0
-pkginfo==1.5.0.1
-pluggy==0.13.1
-Pygments==2.6.1
-pyperclip==1.8.0
-pyqt5>=5.1.14
-pytest>=5.4.0
-python-dateutil==2.8.1
-readme-renderer==24.0
-requests==2.22.0
-requests-toolbelt==0.9.1
-schwifty==2020.1.0
-selenium==3.141.0
-six==1.13.0
-Sphinx==2.3.1
-sphinx-rtd-theme==0.4.3
-SQLAlchemy==1.3.13
-tox>=3.15.0
-tox-pyenv>=1.1.0
-urllib3==1.25.7
-xlrd==1.2.0
-XlsxWriter==1.2.7
-xl2dict>=0.1.5

+ 0 - 5
run_test.sh

@@ -1,5 +0,0 @@
-echo "Running tests"
-coverage run -m pytest
-echo "Building coverage"
-coverage html
-echo "HTML Coverage: ${PWD}/htmlcov/index.html"

File diff suppressed because it is too large
+ 0 - 1
testrun.json


+ 0 - 42
windows/baangtSetupWindows.iss

@@ -1,42 +0,0 @@
-; Script generated by the Inno Script Studio Wizard.
-; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
-
-[Setup]
-; NOTE: The value of AppId uniquely identifies this application.
-; Do not use the same AppId value in installers for other applications.
-; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
-AppId={{CC5A1C52-ADE4-4579-8D7F-8B1CA5C171FF}
-AppName=baangt
-AppVersion=2020.4.7rc4
-;AppVerName=baangt 2020.4.7rc4
-AppPublisher=Buhl Consulting Ltd
-AppPublisherURL=https://baangt.org
-AppSupportURL=https://baangt.org
-AppUpdatesURL=https://baangt.org
-DefaultDirName={pf}\baangt
-DefaultGroupName=baangt
-AllowNoIcons=yes
-LicenseFile=C:\Users\buhl\git\baangt\LICENSE
-OutputBaseFilename=baangtsetup
-Compression=lzma
-SolidCompression=yes
-
-[Tasks]
-Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
-Name: "quicklaunchicon"; Description: "{cm:CreateQuickLaunchIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked; OnlyBelowVersion: 0,6.1
-
-
-[Icons]
-Name: "{group}\baangt"; Filename: "{app}\baangt.exe"
-Name: "{commondesktop}\baangt"; Filename: "{app}\baangt.exe"; Tasks: desktopicon
-Name: "{userappdata}\Microsoft\Internet Explorer\Quick Launch\baangt"; Filename: "{app}\baangt.exe"; Tasks: quicklaunchicon
-
-[Run]
-Filename: "{app}\baangt.exe"; Description: "{cm:LaunchProgram,baangt}"; Flags: nowait postinstall skipifsilent
-
-[Files]
-Source: "..\dist\baangt\*"; DestDir: "{app}"; Flags: ignoreversion createallsubdirs recursesubdirs
-Source: "..\examples\*.*"; DestDir: "{%USERPROFILE}\baangt\examples"; Flags: ignoreversion createallsubdirs recursesubdirs
-
-[Dirs]
-Name: "{%USERPROFILE}\baangt"

+ 0 - 40
windows/baangtWindows.spec

@@ -1,40 +0,0 @@
-# -*- mode: python ; coding: utf-8 -*-
-
-block_cipher = None
-
-a = Analysis(['..\\baangtIA.py'],
-             pathex=['windows'],
-             binaries=[],
-             datas=[('../baangt/ressources/baangtLogo2020Small.png', 'ressources'),
-                    ('../examples/', 'examples'),
-                    ('../browsermob-proxy','browsermob-proxy')],
-             hiddenimports=["cffi", "pyQT5"],
-             hookspath=[],
-             runtime_hooks=[],
-             excludes=[],
-             win_no_prefer_redirects=False,
-             win_private_assemblies=False,
-             cipher=block_cipher,
-             noarchive=False)
-pyz = PYZ(a.pure, a.zipped_data,
-             cipher=block_cipher)
-exe = EXE(pyz,
-          a.scripts,
-          a.binaries,
-          [],
-          exclude_binaries=True,
-          windowed=True,
-          name='baangt',
-          debug=False,
-          bootloader_ignore_signals=False,
-          strip=False,
-          upx=True,
-          console=True )
-coll = COLLECT(exe,
-               a.binaries,
-               a.zipfiles,
-               a.datas,
-               strip=False,
-               upx=True,
-               upx_exclude=[],
-               name='baangt')