diff --git a/project-repo/review-process.md b/project-repo/review-process.md index 27fd5af9..f0099608 100644 --- a/project-repo/review-process.md +++ b/project-repo/review-process.md @@ -2,88 +2,232 @@ ## General principles -In general direct pushes to the `main` branch are not allowed, and Pull Requests should be used. The only allowed push to `main` is to push the tag for a new release. Optionally, the direct push of an emergency fix to `main` may be allowed, but a good reason is needed for bypassing the pull request requirement. - -> Even in those cases, it’s still better to go through a pull request to ensure that automated tests and checks are running, even if it’s merged without further reviews. - -Each pull request should have at least one reviewer. This is our way of ensuring quality and double-checking on what we produce as a team. This applies both to the projects we are the upstream for or we are a contributor to, as well as to packaging (snap or .deb) changes. - -When contributing to the projects we are not the upstream for, we of course follow upstream's methodology for code reviews. However, the guidelines below can still help ensure that any submission made by a Canonical contributor is of high quality right from the get-go. - -If a change has been done in a pair-programming context, that can be considered as already reviewed. However, sanity checks (see the first steps) are still expected to be carried out by both parties, before merging. +In general direct pushes to the `main` branch are not allowed, and +Pull Requests should be used. The only allowed push to `main` is to +push the tag for a new release. Optionally, the direct push of an +emergency fix to `main` may be allowed, but a good reason is needed +for bypassing the pull request requirement. + +> Even in those cases, it's still better to go through a pull request +> to ensure that automated tests and checks are running, even if it's +> merged without further reviews. 
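The release-tag exception described above can be sketched in a throwaway repository; the version `v1.2.3`, the demo identity, and the `git push` target in the trailing comment are placeholders for this illustration, not real project values:

```shell
# Sketch of the one direct push to `main` that is allowed: a release
# tag.  Everything below runs in a throwaway repository; "v1.2.3" and
# the demo identity are placeholders.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "chore: initial commit"
# Tag the tip of the mainline for the release...
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v1.2.3 -m "Release v1.2.3"
git tag --list 'v*'
# ...then, in a real checkout, push only the tag:
#   git push origin v1.2.3
```

Pushing a tag uploads only the tag object; no new commits land on `main` outside a pull request.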
+ +Each pull request should have at least one reviewer. This is our way +of ensuring quality and double-checking on what we produce as a team. +This applies both to the projects we are the upstream for and to +those we contribute to, as well as to packaging (snap or .deb) +changes. + +When contributing to the projects we are not the upstream for, +we of course follow upstream's methodology for code reviews. +However, the guidelines below can still help ensure that any +submission made by a Canonical contributor is of high quality +right from the get-go. + +If a change has been done in a pair-programming context, that can be +considered as already reviewed. However, sanity checks (see the first +steps) are still expected to be carried out by both parties, before +merging. ### Policies and guidelines -The aim of these guidances is to ease the maintainability of code and the project as a whole, in the long run. Double questioning an approach or providing suggestions should always follow this trail of thoughts: How will it make the code easier to understand and maintain, and more robust? Can we avoid premature optimisations towards that goal? - -* Always prefer to land multiple small pull requests, rather than a large pull request with multiple unrelated changes mixed in. Both your future self and the reviewer will thank you. -* As a consequence, we don’t encourage "long-lived feature branch". -* Any new code and fixes should be accompanied with new or modified existing tests. Only fixes for rare crashes that are exceptionally difficult to reproduce are acceptable without accompanying automated tests. If that is the case for your particular fix, please ensure the description reflects this. -* Any commits merged in the `main` branch need to pass the tests. This is our mainline. You can split your changes into multiple, more discrete modifications, so as to help give additional context when one needs to dig into the git history later on.
-* Reviewing a change is serious business. A generic "LGTM" should be more the exception than the rule. Comments are encouraged and should be seen as positive feedback to broaden our understanding of the issue at hand. Accordingly, the commenter should strive to make their feedback as constructive as possible. -* When there are multiple reviewers on a PR, please pay attention to the comments and answers of your peers! This helps building shared understanding of the code. All of us could always learn something new. :-) -* Try to not let an open pull request remain without activity for more than 3 days. Especially, try to be responsive within a day for contributions from external contributors. Stale branches should be either worked on or deleted. This helps ensure that the repository has an easy and actionable pull request review list. -* Automated pull requests for dependency updates should be looked at with scrutiny: tests passing does not necessarily imply that the pull request is good to merge. Examine at the upstream changelog and/or commit logs, and assess the impact to see if we would benefit from the changes, and generally aim to reduce technical debt. If any changes are required to make the tests pass again for such a pull request, those changes should be added to the very same branch, so that merging the pull request would not leave the tests broken. +The aim of these guidelines is to ease the maintainability of code +and the project as a whole, in the long run. Questioning an approach +or providing suggestions should always follow this train of thought: +How will it make the code easier to understand and maintain, and more +robust? Can we avoid premature optimisations towards that goal? + +* Always prefer to land multiple small pull requests, rather than a + large pull request with multiple unrelated changes mixed in. Both + your future self and the reviewer will thank you. + +* As a consequence, we don't encourage "long-lived feature branches".
+ +* Any new code and fixes should be accompanied by new tests or + modifications to existing tests. Only fixes for rare crashes that + are exceptionally difficult to reproduce are acceptable without + accompanying automated tests. If that is the case for your + particular fix, please ensure the description reflects this. + +* Any commits merged into the `main` branch need to pass the tests. + This is our mainline. You can split your changes into multiple, + more discrete modifications, so as to help give additional context + when one needs to dig into the git history later on. + +* Reviewing a change is serious business. A generic "LGTM" should be + more the exception than the rule. Comments are encouraged and + should be seen as positive feedback to broaden our understanding of + the issue at hand. Accordingly, the commenter should strive to make + their feedback as constructive as possible. + +* When there are multiple reviewers on a pull request, please pay + attention to the comments and answers of your peers! This helps + build shared understanding of the code. All of us could always + learn something new. :-) + +* Try not to let an open pull request remain without activity for more + than 3 days. Especially, try to be responsive within a day for + contributions from external contributors. Stale branches should be + either worked on or deleted. This helps ensure that the repository + has an easy and actionable pull request review list. + +* Automated pull requests for dependency updates should be looked at + with scrutiny: tests passing does not necessarily imply that the + pull request is good to merge. Examine the upstream changelog + and/or commit logs, and assess the impact to see if we would benefit + from the changes, and generally aim to reduce technical debt.
If + any changes are required to make the tests pass again for such a + pull request, those changes should be added to the very same branch, + so that merging the pull request would not leave the tests broken. ### Merge workflow -We should use the **merge commit workflow** (not squash and merge). The history of each branch should make sense on its own, with few or no "fixup" commits. +We should use the **merge commit workflow** (not squash and merge). +The history of each branch should make sense on its own, with few or +no "fixup" commits. + +Individual commits do not follow [conventional commits][convcommits] +and are free form. However, for better readability of the mainline, +the merge commit in the mainline should follow [conventional +commits][convcommits]. -Individual commits do not follow [conventional commits][convcommits] and are free form. However, for better readability of the mainline, the merge commit in the mainline should follow [conventional commits][convcommits]. +For this, the pull request title should be of the form: -For this, the PR title should be of the form: `<type>(<scope>): <description>`. The scope is optional. The first comment will be the commit body and needs to reference any associated JIRA card on the last line, separated by a blank line. +``` +<type>(<scope>): <description> +``` + +The scope is optional. The first comment will be the commit body +and needs to reference any associated Jira card on the last line, +separated by a blank line. The accepted `types` are: -* `build`: Changes that affect the build system or external dependencies +* `build`: Changes that affect the build system or external + dependencies * `ci`: Changes to the CI configuration * `deps`: Update dependencies * `docs`: Documentation only changes * `feat`: A new feature * `fix`: A bug fix -* `maint`: Changes that do not affect the meaning of the code (white-spaces, formatting, missing semi-colons, typo fixes, etc).
Gardening… +* `maint`: Changes that do not affect the meaning of the code + (white-spaces, formatting, missing semi-colons, typo fixes, etc). + Gardening... * `perf`: A code change that improves performance -* `refactor`: A code change that neither fixes a bug nor adds a feature (nor add bugs)! +* `refactor`: A code change that neither fixes a bug nor adds a + feature (nor adds bugs)! * `tests`: Adding missing tests or correcting existing tests -> TODO: have a document defining what to configure in every new GitHub project (webhook for Jira, merge commit + default commit template, automated merged branch deletion). +> TODO: have a document defining what to configure in every new GitHub +> project (webhook for Jira, merge commit + default commit template, +> automated merged branch deletion). ## Process -Don't hesitate to comment and reiterate! When you engage with a pull request, the expectation is that you remain responsive, and not let the pull request go stale. +Don't hesitate to comment and iterate! When you engage with a pull +request, the expectation is that you remain responsive, and not let +the pull request go stale. + +1. Team members should [sign their commits][sign-commits]. While this + may be difficult to enforce for community contributions, we expect + team members to _always_ follow this guideline. + +2. The pull request should always be opened as a draft, to avoid + notifying the reviewers before the tests pass and the pull request + is fully ready for review. Ensure the description of changes is + readable and understandable by prospective reviewers. The pull + request should be linked to a Jira card, using `UDENG-XXX` in its + description, so that Jira links against it. + +3. The pull request diff should be examined by the submitter - it may + happen that a debug print statement was left in, or an unrelated + change was accidentally `git add`ed. Please at least do a quick + visual scan to help catch such potential issues/typos. + +4.
The submitter should wait and check for the CI feedback to indicate + a pass (linting, testing, building, automated generation, etc.). + Static security checks (run as part of linting where possible) are + part of that stage. Feel free to ping a team member to ask for help + before a formal review, if something doesn't pass (like tests) and + you don't have any ideas on how to fix it. In such cases, the pull + request description should mention that the tests don't pass. -1. Team members should [sign their commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits). While this may be difficult to enforce for community contributions, we expect team members to _always_ follow this guideline. -1. The pull request should always be opened as a draft, to avoid notifying the reviewers before the tests pass and the pull request is fully ready for review. Ensure the description of changes is readable and understandable by prospective reviewers. The pull request should be linked to a Jira card, using `UDENG-XXX` in its description, so that Jira links against it. -1. The pull request diff should be examined by the submitter - it may happen that a debug print statement was left in, or a non-related change was accidentally `git add`ed. Please at least do a quick visual scan to help catch such potential issues/typos. -1. The submitter should wait and check for the CI feedback to indicate a pass (linting, testing, building, automated generation, etc.). Static security check tool (part of linting if possible) is part of that stage. Feel free to ping a team member to ask for help before a formal review, if something doesn’t pass (like tests) and you don’t have any ideas on how to fix it. In such cases, the pull request description should mention that the tests don’t pass. > You should squash fixes at this staged with the relevant commits. -1.
Code coverage report should be posted, and any significant drop in coverage should be questioned or commented on. -1. Request for review by marking the pull request as ready for review. The team that is responsible for this pull request (automatically pre-populated in `CODEOWNERS`) will be notified. - > If the code change was developed in pair programming with another member of the owning team, the authors of the change can approve the pull request directly, after looking over the diff to make sure it's generally in good shape. -1. Another member of the team does a code review: - * Suggestion are to be as comments for the corresponding line(s) in the diff. - > If the diff is very large, using the "mark as read" check-mark on GitHub can help. Launchpad currently does not have a similar feature, unfortunately. - * Code coverage is to be examined to see if there are new areas in the code that are not covered by the changes, which could have been covered. - * Suggestions for revising/rewording history of the pull request's commits should be put in the general comment section. - > If there are too many changes or strong disagreement on the approach, feel free to jump on a call, or even start a pair programming session. -1. The reviewer approves the pull request or requests further changes. + +5. Code coverage report should be posted, and any significant drop in + coverage should be questioned or commented on. + +6. Request for review by marking the pull request as ready for review. + The team that is responsible for this pull request (automatically + pre-populated in `CODEOWNERS`) will be notified. + + > If the code change was developed in pair programming with another + > member of the owning team, the authors of the change can approve + > the pull request directly, after looking over the diff to make + > sure it's generally in good shape. + +7. 
Another member of the team does a code review: + + * Suggestions are to be left as comments on the corresponding + line(s) in the diff. + + > If the diff is very large, using the "mark as read" check-mark + > on GitHub can help. Launchpad currently does not have a + > similar feature, unfortunately. + + * Code coverage is to be examined to see whether the changes leave + new areas of the code uncovered that could have been covered. + + * Suggestions for revising/rewording the history of the pull + request's commits should be put in the general comment section. + + > If there are too many changes or strong disagreement on the + > approach, feel free to jump on a call, or even start a pair + > programming session. + +8. The reviewer approves the pull request or requests further changes. ### If changes are required -1. The submitter either argues on each comment or gives a 👍 on each suggestion to ensure those are acknowledged. -1. Any changes should be appended to the commit history, ready to be squashed into potential relevant existing commits. The new changes should be kept as separate commits for the duration of the review, to ease further reviews, as most tools don’t have a good diff-of-diffs view. -1. The submitter assesses the CI test results and code coverage, and requests another review. -1. The reviewer examines the changes, and resolves any fixed comments or agrees with the arguments from the submitter. - > It’s always the person opening a comment, **not** the submitter of the pull request, that should resolve the comments, once they are happy with the particular change. +1. The submitter either argues their case on each comment or gives a + 👍 on each suggestion to ensure those are acknowledged. + +2. Any changes should be appended to the commit history, ready to be + squashed into any relevant existing commits.
The new changes + should be kept as separate commits for the duration of the review, + to ease further reviews, as most tools don't have a good + diff-of-diffs view. +3. The submitter assesses the CI test results and code coverage, and + requests another review. + +4. The reviewer examines the changes, and resolves any fixed comments + or agrees with the arguments from the submitter. + + > It's always the person opening a comment, **not** the submitter + > of the pull request, that should resolve the comments, once they + > are happy with the particular change. + -This cycle is repeated if there are further comments or if some are still left open. Once all comments are resolved, the reviewer should approve. +This cycle is repeated if there are further comments or if some are +still left open. Once all comments are resolved, the reviewer should +approve. ### Final merge -1. The submitter will squash the optional additional commits and force push to the pull request. -1. The submitter examines one last time the diff, the test results, and the coverage. -1. The submitter is in charge of merging the changes by clicking the merge button. +1. The submitter will squash the optional additional commits and force + push to the pull request. + +2. The submitter examines one last time the diff, the test results, + and the coverage. + +3. The submitter is in charge of merging the changes by clicking the + merge button. -> Note: The final merge steps may be proceeded by the reviewer if we are in a hurry. However, it’s preferable to allow the submitter to do the actual merging, as an additional way of acknowledging their work and enjoying the dopamine boost. :-) +> Note: The final merge steps may be performed by the reviewer if we +> are in a hurry. However, it's preferable to allow the submitter to +> do the actual merging, as an additional way of acknowledging their +> work and enjoying the dopamine boost.
:-) [convcommits]: https://www.conventionalcommits.org/en/v1.0.0/ +[sign-commits]: https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits diff --git a/project-repo/style-guide.md b/project-repo/style-guide.md new file mode 100644 index 00000000..7fad2b77 --- /dev/null +++ b/project-repo/style-guide.md @@ -0,0 +1,66 @@ +Style guide +=========== + +This document includes a series of guidelines and recommendations for +formatting code and documentation, aimed mainly at the [new] projects +we are the upstream for. Otherwise, for existing projects and those +where we are not the upstream, the project's existing guidelines and +policies should take precedence over those outlined in this document. + +Code +---- + +For new projects where we are the upstream, we aim to follow community +formatting conventions for the programming language where available. +If the language has tools for automatically formatting code, they +should be used. For example, for Rust, this would be `rustfmt`; and +for Go, we prefer `gofumpt`, a stricter `gofmt`. + +Documentation +------------- + +For new projects where we are the upstream, please aim to follow these +formatting conventions for markup text: + +- Try to limit lines to about 70 characters if the markup language is + newline-agnostic in paragraph texts, meaning that breaking a long + line in a paragraph does not affect the generated/rendered output. + Markdown, reStructuredText, (La)TeX, and HTML are some instances of + such markup languages. On the other hand, the MoinMoin wiki syntax + as seen on the Ubuntu Wiki is sensitive to line breaks, and each + newline corresponds to a new paragraph.
+ + Keeping lines short when possible has various advantages, including + better readability of the text, more readable diffs on GitHub and + other tools, avoiding line wrapping especially on smaller screens, + and a clearer view when positioning multiple columns/windows of text + side-by-side on larger screens. + +- Use two spaces (instead of one) at the end of a sentence after the + ending mark (period, exclamation mark, or question mark) if the + markup language is space-agnostic, meaning that multiple spaces + are collapsed to one in the generated/rendered output. Markdown, + reStructuredText, (La)TeX, and HTML are some instances of such + markup languages. + + Using two spaces to separate sentences has several advantages, + including: + + - Unambiguously separating sentences. This is especially useful + when writing prose in languages like English, where a period might + appear in the middle of a sentence, for example when using "i.e." + or "e.g."; separating sentences with two spaces indicates to both + the human reader and the computer that the sentence has ended + there. + + - Formatting and text manipulation commands in various text editors + like GNU Emacs and Vim default to expecting two spaces at the end + of sentences. For example, Emacs's `fill-paragraph` and Vim's + `gqq` (wrap current line) and `gqip` (wrap current paragraph) + expect sentences to be separated by two spaces. + + - Visually, separating sentences by two spaces makes it easier for + the human eye to find where a sentence ends and the next one + begins. This is especially true when reading text that is set in + a monospace typeface, as is the case for the majority of people + who read and write documentation markup in a text editor.
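The 70-column reflow described above can be done with the editor commands already mentioned (Emacs `fill-paragraph`, Vim `gqip`) or from the command line; here is a small sketch using GNU `fmt`, with a sample sentence invented purely for the demonstration:

```shell
# Reflow an over-long paragraph to at most 70 columns.  The input
# sentence below is invented purely for this demonstration.
printf '%s\n' 'Keeping lines to about seventy columns makes diffs easier to read and keeps text legible when several windows are placed side by side.' \
    | fmt -w 70
```

Every output line stays within the 70-column budget, which is exactly the property the diff-readability advice above relies on.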
diff --git a/project-repo/technology-choices.md b/project-repo/technology-choices.md index 9a890391..c10043b4 100644 --- a/project-repo/technology-choices.md +++ b/project-repo/technology-choices.md @@ -2,42 +2,70 @@ ## Languages -When patching and working on any upstream project, the main principle is to adhere to upstream's coding style and principles. This could be C/GLib, Python, Rust, etc. +When patching and working on any upstream project, the main principle +is to adhere to upstream's coding style and principles. This could be +C/GLib, Python, Rust, etc. -However, for the new projects we are the upstream for, they should fall under one of the following accepted technology categories for our team: +However, new projects we are the upstream for should fall under one +of the following accepted technology categories for our team: * New GUI: Dart/Flutter * New service/CLI: Go -* New shared libraries, and service/CLI or bindings to an existing C project (experimentally): Rust +* New shared libraries, and service/CLI or bindings to an existing + C project (experimentally): Rust -If you find yourself wanting to implement a new project using technologies outside of the above, please reach out to your manager - and explain your reasoning - for an approval. +If you find yourself wanting to implement a new project using +technologies outside of the above, please reach out to your +manager - and explain your reasoning - for approval. ## Templates -Templates are available for any of the above technologies. Please see [How to use our templates](how-to-use-templates.md). +Templates are available for any of the above technologies. +Please see [How to use our templates](how-to-use-templates.md). ## General design -The internal and external logic for a system should be separated. The service / (UI|external stimuli) split approach has been used many times with success in the team.
The main idea is: +The internal and external logic for a system should be separated. +The service / (UI|external stimuli) split approach has been used +many times with success in the team. The main idea is: * Most of the business logic lives inside the service. -* The UI (CLI, GUI, or the exposed API) is clearly defined and contains mostly the graphical representation, piloted by the service itself. +* The UI (CLI, GUI, or the exposed API) is clearly defined and + contains mostly the graphical representation, piloted by the + service itself. -This allows simple mocking between each stack, and enables easier reuse of our business logic across multiple interface representations. +This allows simple mocking between each stack, and enables easier +reuse of our business logic across multiple interface representations. ## APIs -APIs should be versioned. `v0` can be used to indicate an unversioned API at first during the initial development. +APIs should be versioned. `v0` can be used to indicate an unversioned +API at first during the initial development. -> Note that APIs using **protobuf** should not need versioning, especially if used with gRPC, as the syntax enforces backward compatibility rules. +> Note that APIs using **protobuf** should not need versioning, +> especially if used with gRPC, as the syntax enforces backward +> compatibility rules. ## Inter-process communication Two technologies have been used in our team for IPC so far: -* D-Bus (GNU/Linux communication event bus): Should be used when interoperating with existing services. It supports a privileged control mechanism and apparmor mediation, which makes it suitable for API access control in snaps. It should use D-Bus activation to spin up services when desired. Please see the article ["How to Version D-Bus Interfaces Properly and Why Using / as Service Entry Point Sucks"][dbus-versioning] for tips on versioning a D-Bus API. 
-* gRPC over Unix socket (or TCP): Helps to have a defined and strongly typed API using protobuf. Can only mediate per user (which uid/gid process is talking to my Unix socket). It should use Unix socket activation to spin up services when desired. -> Note: mediation over snap for any API using Unix socket: all snap services are root, so service to service communication needs another mediation protocol. FIXME: OPEN_QUESTION +* D-Bus (GNU/Linux communication event bus): Should be used when + interoperating with existing services. It supports a privileged + control mechanism and AppArmor mediation, which makes it suitable + for API access control in snaps. It should use D-Bus activation to + spin up services when desired. Please see the article ["How to + Version D-Bus Interfaces Properly and Why Using / as Service Entry + Point Sucks"][dbus-versioning] for tips on versioning a D-Bus API. + +* gRPC over Unix socket (or TCP): Helps to have a defined and strongly + typed API using protobuf. It can only mediate per user (i.e. which + uid/gid the process talking to the Unix socket has). It should use + Unix socket activation to spin up services when desired. + +> Note: mediation over snap for any API using Unix socket: all snap +> services are root, so service to service communication needs another +> mediation protocol. FIXME: OPEN_QUESTION [dbus-versioning]: https://0pointer.de/blog/projects/versioning-dbus.html diff --git a/project-repo/templates/common/CONTRIBUTING.md b/project-repo/templates/common/CONTRIBUTING.md index 5d781e60..251f2de3 100644 --- a/project-repo/templates/common/CONTRIBUTING.md +++ b/project-repo/templates/common/CONTRIBUTING.md @@ -1,10 +1,18 @@ # Contributing to PROJECT_TODO -A big welcome and thank you for considering contributing to PROJECT_TODO and Ubuntu! It’s people like you that make it a reality for users in our community. +A big welcome and thank you for considering contributing to +PROJECT_TODO and Ubuntu!
It's people like you that make it a +reality for users in our community. -Reading and following these guidelines will help us make the contribution process easy and effective for everyone involved. It also communicates that you agree to respect the time of the developers managing and developing this project. In return, we will reciprocate that respect by addressing your issue, assessing changes, and helping you finalize your pull requests. +Reading and following these guidelines will help us make the +contribution process easy and effective for everyone involved. +It also communicates that you agree to respect the time of the +developers managing and developing this project. In return, +we will reciprocate that respect by addressing your issue, +assessing changes, and helping you finalize your pull requests. -These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. +These are mostly guidelines, not rules. Use your best judgment, +and feel free to propose changes to this document in a pull request. ## Quicklinks @@ -18,48 +26,97 @@ These are mostly guidelines, not rules. Use your best judgment, and feel free to ## Code of Conduct -We take our community seriously and hold ourselves and other contributors to high standards of communication. By participating and contributing to this project, you agree to uphold our [Code of Conduct](https://ubuntu.com/community/code-of-conduct). +We take our community seriously and hold ourselves and other +contributors to high standards of communication. By participating +and contributing to this project, you agree to uphold our +[Code of Conduct](https://ubuntu.com/community/code-of-conduct). ## Getting Started -Contributions are made to this project via Issues and Pull Requests (PRs). A few general guidelines that cover both: +Contributions are made to this project via "issues" and +"pull requests". 
A few general guidelines that cover both: -* To report security vulnerabilities, please use the advisories page of the repository and not a public bug report. Please use [launchpad private bugs](https://bugs.launchpad.net/ubuntu/+source/PROJECT_TODO/+filebug) which is monitored by our security team. On ubuntu machine, it’s best to use `ubuntu-bug PROJECT_TODO` to collect relevant information. FIXME: snap? -* Search for existing Issues and PRs on this repository before creating your own. -* We work hard to makes sure issues are handled in a timely manner but, depending on the impact, it could take a while to investigate the root cause. A friendly ping in the comment thread to the submitter or a contributor can help draw attention if your issue is blocking. -* If you've never contributed before, see [this Ubuntu discourse post](https://discourse.ubuntu.com/t/contribute/26) for resources and tips on how to get started. +* To report security vulnerabilities, please use the advisories + page of the repository and not a public bug report. Please use + [Launchpad private bugs](https://bugs.launchpad.net/ubuntu/+source/PROJECT_TODO/+filebug), + which is monitored by our security team. On an Ubuntu machine, + it's best to use `ubuntu-bug PROJECT_TODO` to collect relevant + information. FIXME: snap? + +* Search for existing issues and pull requests on this repository + before creating your own. + +* We work hard to make sure issues are handled in a timely manner, + but depending on the impact, it could take a while to investigate + the root cause. A friendly ping in the comment thread to the + submitter or a contributor can help draw attention if your issue + is blocking. + +* If you've never contributed before, see + [this Ubuntu discourse post](https://discourse.ubuntu.com/t/contribute/26) + for resources and tips on how to get started.
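One lightweight way to follow the "search before filing" advice above is to also grep the project history for related fixes; the throwaway repository and the commit message below are fabricated purely for this demo:

```shell
# Search a repository's history for an existing fix before filing a
# duplicate issue.  The repository and commit here are fabricated.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: handle empty config file gracefully"
# A hit here suggests the problem may already be fixed:
git log --oneline --grep='empty config'
```

In a real checkout you would run only the final `git log --grep` command against the project's actual history.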
### Issues -Issues should be used to report problems with the software, request a new feature, or to discuss potential changes before a PR is created. When you create a new Issue, a template will be loaded that will guide you through collecting and providing the information we need to investigate. +Issues should be used to report problems with the software, request a
+new feature, or to discuss potential changes before a pull request is
+created. When you create a new issue, a template will be loaded that
+will guide you through collecting and providing the information we
+need to investigate. -If you find an Issue that addresses the problem you're having, please add your own reproduction information to the existing issue rather than creating a new one. Adding a [reaction](https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) can also help be indicating to our maintainers that a particular problem is affecting more than just the reporter. +If you find an issue that addresses the problem you're having, please
+add your own reproduction information to the existing issue rather
+than creating a new one. Adding a
+[reaction](https://github.blog/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/)
+can also help by indicating to our maintainers that a particular
+problem is affecting more than just the reporter. ### Pull Requests -PRs to our project are always welcome and can be a quick way to get your fix or improvement slated for the next release. In general, PRs should: +Pull requests to our project are always welcome and can be a quick
+way to get your fix or improvement slated for the next release.
+In general, pull requests should:
+
+* Only fix/add the functionality in question **OR** address
+  wide-spread whitespace/style issues, not both. -* Only fix/add the functionality in question **OR** address wide-spread whitespace/style issues, not both. * Add unit or integration tests for fixed or changed functionality. 
-* Address a single concern in the least number of changed lines as possible. -* Include documentation in the repo or on our [docs site](https://github.com/canonical/PROJECT_TODO/wiki). -* Be accompanied by a complete Pull Request template (loaded automatically when a PR is created). -For changes that address core functionality or would require breaking changes (e.g. a major release), it's best to open an Issue to discuss your proposal first. This is not required but can save time creating and reviewing changes. +* Address a single concern in as few changed lines as possible.
+
+* Include documentation in the repo or on our
+  [docs site](https://github.com/canonical/PROJECT_TODO/wiki).
+
+* Be accompanied by a complete pull request template (loaded
+  automatically when a pull request is created).
+
+For changes that address core functionality or would require breaking
+changes (e.g. a major release), it's best to open an issue to discuss
+your proposal first. This is not required but can save time creating
+and reviewing changes. -In general, we follow the ["fork-and-pull" Git workflow](https://github.com/susam/gitpr) +In general, we follow the
+["fork-and-pull" Git workflow](https://github.com/susam/gitpr). -1. Fork the repository to your own Github account -1. Clone the project to your machine -1. Create a branch locally with a succinct but descriptive name -1. Commit changes to the branch -1. Following any formatting and testing guidelines specific to this repo -1. Push changes to your fork -1. Open a PR in our repository and follow the PR template so that we can efficiently review the changes. +1. Fork the repository to your own GitHub account
+2. Clone the project to your machine
+3. Create a branch locally with a succinct but descriptive name
+4. Commit changes to the branch
+5. Follow any formatting and testing guidelines specific to this
+   repo
+6. Push changes to your fork
+7. 
Open a pull request in our repository and follow the pull request
+   template so that we can efficiently review the changes. -> PRs will trigger unit and integration tests with and without race detection, linting and formatting validations, static and security checks, freshness of generated files verification. All the tests must pass before merging in main branch. +> Pull requests will trigger unit and integration tests with and
+> without race detection, linting and formatting validations, static
+> and security checks, and verification that generated files are
+> fresh. All the tests must pass before merging into the main branch. -Once merged to the main branch, `po` files and any documentation change will be automatically updated. Those are thus not necessary in the pull request itself to minimize diff review. +Once merged to the main branch, `po` files and any documentation
+change will be automatically updated. Those are thus not needed
+in the pull request itself, which keeps the diff to review minimal. ## Contributing to the code @@ -73,28 +130,39 @@ TODO ### About the testsuite -The project includes a comprehensive testsuite made of unit and integration tests. All the tests must pass before the review is considered. If you have troubles with the testsuite, feel free to mention it on your PR description. +The project includes a comprehensive testsuite made of unit and
+integration tests. All the tests must pass before the review is
+considered. If you have trouble with the testsuite, feel free to
+mention it in your pull request description. TODO -The test suite must pass before merging the PR to our main branch. Any new feature, change or fix must be covered by corresponding tests. +The test suite must pass before merging the pull request to our
+main branch. Any new feature, change or fix must be covered by
+corresponding tests. ### Code style -This project follow the TODO code-style. 
For more informative information about the code style in use, please check: +This project follows the TODO code-style. For more information
+about the code style in use, please check: * For go: * For Flutter: -* For Rust: … +* For Rust: ... ## Contributor License Agreement -It is required to sign the [Contributor License Agreement](https://ubuntu.com/legal/contributors) in order to contribute to this project. +It is required to sign the
+[Contributor License Agreement](https://ubuntu.com/legal/contributors)
+in order to contribute to this project. -An automated test is executed on PRs to check if it has been accepted. +An automated test is executed on pull requests to check that it has
+been signed. This project is covered by [THIS LICENSE](LICENSE). ## Getting Help -Join us in the [Ubuntu Community](https://discourse.ubuntu.com/c/desktop/8) and post your question there with a descriptive tag. +Join us in the
+[Ubuntu Community](https://discourse.ubuntu.com/c/desktop/8)
+and post your question there with a descriptive tag. diff --git a/project-repo/templates/common/README.md b/project-repo/templates/common/README.md index 7812b5ca..fae246ed 100644 --- a/project-repo/templates/common/README.md +++ b/project-repo/templates/common/README.md @@ -24,25 +24,41 @@ [![Reference documentation][reference-documentation-image]][reference-documentation-url] [![Go Report Card][goreport-image]][goreport-url] -This is the code repository for **PROJECT_TODO**, the …. +This is the code repository for **PROJECT_TODO**, the ... . TODO: More general description about the project. -For general details, including [installation](TODO link to installation instruction) and [Getting started](TODO link to getting started instructions) guides, head over to our [PROJECT_TODO documentation](link to project documentation). 
+For general details, including
+[installation](TODO link to installation instruction)
+and [Getting started](TODO link to getting started instructions)
+guides, head over to our
+[PROJECT_TODO documentation](link to project documentation). ## Troubleshooting -TODO: Add details on how to debug this project, where to increase verbosity, how to find logs, how to run in debug mode. +TODO: Add details on how to debug this project, where to increase
+verbosity, how to find logs, how to run in debug mode. ## Get involved -This is an [open source](LICENSE) project and we warmly welcome community contributions, suggestions, and constructive feedback. If you're interested in contributing, please take a look at our [Contribution guidelines](CONTRIBUTING.md) first. +This is an [open source](LICENSE) project and we warmly welcome
+community contributions, suggestions, and constructive feedback.
+If you're interested in contributing, please take a look at our
+[Contribution guidelines](CONTRIBUTING.md) first. -- to report an issue, please file a bug report against our repository, using a bug template. -- for suggestions and constructive feedback, report a feature request bug report, using the proposed template. +- To report an issue, please file a bug report against our repository,
+  using a bug template.
+
+- For suggestions and constructive feedback, file a feature request,
+  using the proposed template. ## Get in touch -We're friendly! We have a community forum at [https://discourse.ubuntu.com](https://discourse.ubuntu.com) where we discuss feature plans, development news, issues, updates and troubleshooting. +We're friendly! We have a community forum at
+[https://discourse.ubuntu.com](https://discourse.ubuntu.com)
+where we discuss feature plans, development news, issues, updates,
+and troubleshooting. -For news and updates, follow the [Ubuntu twitter account](https://twitter.com/ubuntu) and on [Facebook](https://www.facebook.com/ubuntu). 
+For news and updates, follow the
+[Ubuntu Twitter account](https://twitter.com/ubuntu)
+and the [Ubuntu Facebook page](https://www.facebook.com/ubuntu). diff --git a/project-repo/tests.md b/project-repo/tests.md index 4a7cc8a2..917cf509 100644 --- a/project-repo/tests.md +++ b/project-repo/tests.md @@ -3,107 +3,220 @@ ## General principles * Any new code/feature should be accompanied by tests. -* Tests should be baked by coverage report done in CodeCov. Any uncovered branch should be understood, and covered if needed. We should aim for achieving the highest coverage possible. -* Prefer package/module level testing to unit tests. Try to test only with the external API of this scope (if the language allows you to do this and only poke few holes, try it), and only go on the unit test level when this is needed (a complex, local, function). This approach allows you to assess your API, and is a good documentation for anyone who needs to use this package. -* We do not encourage TDD, which forces crystalising the API way too early in the design process. We thus compensate the lack of guided code path testability assurance by - * using coverage extensively to ensure that the tests are going through the desired pass; and - * reversing, the "want" condition, to ensure that the test is failing as expected. It’s also a good place to assess and check that the returned error and log messages make sense. -* Do not test only the happy path, and ensure that at least one error case - if the method can return an error - is covered. For instance, try to consider how you can trigger more error cases with an inconsistent filesystem state, e.g. a file being read-only when you are expected to write to it, etc.). -* One easy way to expand and facilitate adding tests in the future is to use a combination of table testing and golden files. -* Other than when bootstrapping a project, the tests should always pass in the main branch. +
+* Tests should be backed by coverage reports in CodeCov. 
+  Any uncovered branch should be understood, and covered if needed.
+  We should aim for achieving the highest coverage possible.
+
+* Prefer package/module level testing to unit tests. Try to test only
+  with the external API of this scope (if the language allows you to
+  do this and only poke a few holes, try it), and only go to the unit
+  test level when this is needed (a complex, local function). This
+  approach allows you to assess your API, and serves as good
+  documentation for anyone who needs to use this package.
+
+* We do not encourage TDD, which forces crystalising the API way too
+  early in the design process. We thus compensate for the lack of
+  guided code path testability assurance by
+
+  * using coverage extensively to ensure that the tests are going
+    through the desired paths; and
+
+  * reversing the "want" condition, to ensure that the test is
+    failing as expected. It's also a good place to assess and check
+    that the returned error and log messages make sense.
+
+* Do not test only the happy path, and ensure that at least one
+  error case - if the method can return an error - is covered. For
+  instance, try to consider how you can trigger more error cases with
+  an inconsistent filesystem state, e.g. a file being read-only when
+  you are expected to write to it, etc.
+
+* One easy way to expand and facilitate adding tests in the future is
+  to use a combination of table testing and golden files.
+
+* Other than when bootstrapping a project, the tests should always
+  pass in the main branch. ### Parallel tests -Run your tests in parallel. This in part helps check that a test is not potentially impacted by the load on the system, nor by any shared or global state. +Run your tests in parallel. This in part helps check that a test is
+not potentially impacted by the load on the system, nor by any shared
+or global state. -Also, try to run the tests in random order. 
For instance, in Go, implementing table testing with a `map[string]struct{}` helps ensure we don’t always rely on potential leftover test state. +Also, try to run the tests in random order. For instance, in Go,
+implementing table testing with a map keyed by sub-test name helps
+ensure we don't always rely on potential leftover test state, since
+Go iterates maps in random order. ### Race detector -Enable race detection if your language supports it. You should always first run your test without the race detector. Then, re-run your tests with race detection enabled. +Enable race detection if your language supports it. You should always
+first run your tests without the race detector. Then, re-run your
+tests with race detection enabled. ### Location of the tests -* The unit, package (or module), and integration testing should be in the upstream repository. -* The unit and package tests are in separate files, alongside the code they are actually testing. -* Integration tests should be in a dedicated directory, next to the command line they are testing, or an external entry point with an explicit "integration tests" name. +* The unit, package (or module), and integration tests should be in
+  the upstream repository.
+
+* The unit and package tests are in separate files, alongside the code
+  they are actually testing.
+
+* Integration tests should be in a dedicated directory, next to the
+  command line they are testing, or an external entry point with an
+  explicit "integration tests" name.
+
 * End-to-end tests should be at the root of the project. -Each test should place any fixtures or third party assets next to the assets, within a `testdata/` subdirectory. More on the actual placement in the table testing case. +Each test should place any fixtures or third party assets next to
+the tests that use them, within a `testdata/` subdirectory. More on
+the actual placement in the table testing section. -> Your project's `CONTRIBUTING.md` should mention how to run each kind of test and how to refresh golden files. 
+> Your project's `CONTRIBUTING.md` should mention how to run each kind
+> of test and how to refresh golden files. ### Files created by the tests -Any files or directories created by the tests should be in a temporary directory, and running tests should not create any new files in the current project tree. +Any files or directories created by the tests should be in a temporary
+directory, and running tests should not create any new files in the
+current project tree. ## Table testing -The main approach should be the usage of table testing driven by coverage. Table testing should use a map (or dictionary) of strings to test case objects. +The main approach should be table testing driven by coverage.
+Table testing should use a map (or dictionary) of strings to
+test case objects. -* The string should describe what the tests are actually doing (capitalized). It's a sub-test of a given test. -* The test case itself should be a structure type of the tests input and expected outcomes (if not a golden file). +* The string should describe what the test is actually doing
+  (capitalized). It's a sub-test of a given test. -> In the rest of the document, we are going to call the main test "family test" and sub-test just "test" or "test case". +* The test case itself should be a structure type holding the test's
+  input and expected outcomes (if not a golden file).
+
+> In the rest of the document, we are going to call the main test the
+> "family test" and a sub-test just a "test" or "test case". The use of table testing allows us to: -1. Easily demonstrate how a component or API is supposed to be, giving a great example of its expectations. It outlines clearly the “input/output” contract for a given API, and additionally allows one to easily see how to build an object and which sequence of calls is expected from the API. -1. Easily extends the coverage by adding a new test case. 
"What if I give this input to trigger that situation, what if I use this fixture to trigger an error, etc." -1. Quickly parse the various cases and contracts of an API usage. "If I give those inputs, I will get back this." -1. Force factoring out commonalities between tests and perhaps re-design some parts of the behaviour. If the API changes later on, only adapt one test for it, not many. +1. Easily demonstrate how a component or API is supposed to be, giving + a great example of its expectations. It outlines clearly the + "input/output" contract for a given API, and additionally allows one + to easily see how to build an object and which sequence of calls is + expected from the API. + +2. Easily extends the coverage by adding a new test case. "What if + I give this input to trigger that situation, what if I use this + fixture to trigger an error, etc." -Any API level module/package should be tested independently, with a corresponding family test. If triggering some edge cases for this family diverges too far from the main family tests, then additional, separate tests are more appropriate. However, when doing this, always consider the additional cost/impact this might have on future enhancements requiring changes to code behaviour, call order, or the API itself. +3. Quickly parse the various cases and contracts of an API usage. + "If I give those inputs, I will get back this." -One easy way to expand and facilitate adding tests in the future is to use a combination of table testing and golden files. +4. Force factoring out commonalities between tests and perhaps + re-design some parts of the behaviour. If the API changes + later on, only adapt one test for it, not many. + +Any API level module/package should be tested independently, with a +corresponding family test. If triggering some edge cases for this +family diverges too far from the main family tests, then additional, +separate tests are more appropriate. 
However, when doing this,
+always consider the additional cost/impact this might have on future
+enhancements requiring changes to code behaviour, call order, or the
+API itself.
+
+One easy way to expand and facilitate adding tests in the future is to
+use a combination of table testing and golden files. ### Fixture location -* The fixture assets tree used in a single family test should be placed under `testdata//`. -* If a fixture is specific to only one sub-test, it could even be placed under `testdata//`. This could be loaded automatically, based on the family and sub-test names, by the test code. -* Any shared fixtures between multiple tests can be directly placed under `testdata/`, usually organized with meaningful directory namings. +* The fixture assets tree used in a single family test should be
+  placed under `testdata/<family test name>/`.
+
+* If a fixture is specific to only one sub-test, it could even be
+  placed under `testdata/<family test name>/<sub-test name>/`. This
+  could be loaded automatically, based on the family and sub-test
+  names, by the test code.
+
+* Any shared fixtures between multiple tests can be directly placed
+  under `testdata/`, usually organized with meaningful directory
+  names. ## Golden testing -Golden files allow easily comparing the expected output to a reference that is checked in within the code-base. This avoids, in particular with table testing, an expanded "want" content. +Golden files allow easily comparing the expected output to a reference
+that is checked into the code-base. This avoids, in particular
+with table testing, an expanded "want" content. -This also helps on the maintenance front: there should be an `UPDATE` flag (preferably an environment variable like `_UPDATE_GOLDEN=1`) to automatically update the references. Then, as part of any update with new fields, or simply a new test addition, the maintainer simply does the update automatically - so that the tests would pass - and runs `git diff` to review the changes and ensure they make sense. 
Finally, these files are added and committed to become the new reference. +This also helps on the maintenance front: there should be
+an `UPDATE` flag (preferably an environment variable like
+`_UPDATE_GOLDEN=1`) to automatically update the
+references. Then, as part of any update with new fields,
+or simply a new test addition, the maintainer simply does the
+update automatically - so that the tests would pass - and runs
+`git diff` to review the changes and ensure they make sense.
+Finally, these files are added and committed to become the new
+reference. -> Your project's `CONTRIBUTING.md` should mention how to refresh golden files. +> Your project's `CONTRIBUTING.md` should mention how to refresh
+> golden files. ### Golden file location -Similarly to fixtures, golden files are located depending on their impact: +Similarly to fixtures, golden files are located depending on their
+impact: * `testdata/<family test name>/golden` for a simple test name. * `testdata/<family test name>/golden/<sub-test name>`. -`golden` or `` is a simple file (if the golden file is an unmarshalled structure, or directory). +`golden` or `<sub-test name>` is a simple file, or a directory if the
+golden content is an unmarshalled structure. ### Structure of golden files -You can serialize/deserialize the expected object and compare it against the reference. For readability, the serial format can be yaml. +You can serialize/deserialize the expected object and compare it
+against the reference. For readability, the serialisation format can
+be YAML. ### Golden tree -When the outcome of an API call is not an object, but one or more configured files on the disk, then the golden content is a tree representing the files and directories created by the method call. +When the outcome of an API call is not an object, but one or more
+configured files on the disk, then the golden content is a tree
+representing the files and directories created by the method call. 
-In general, the test will alter the `root` of the system to a temporary directory, then compare it with the reference golden tree, ensuring that each folder is created as expected, and each file has the correct permissions and contents. +In general, the test will alter the `root` of the system to a
+temporary directory, then compare it with the reference golden tree,
+ensuring that each folder is created as expected, and each file has
+the correct permissions and contents. -If an update is required, then, the contents of this temporary directory will directly replace the golden tree path after its purge. This ensures there are no leftovers in the long term. +If an update is required, the contents of this temporary directory
+directly replace the golden tree path after it has been purged.
+This ensures there are no leftovers in the long term. -> Empty created directories still need to be checked into git. But git only tracks non-empty directories, so in general, we touch a `.empty` file as a tracker, which should then be ignored during tree comparisons. +> Empty created directories still need to be checked into git.
+> But git only tracks non-empty directories, so in general,
+> we touch a `.empty` file as a tracker, which should then be
+> ignored during tree comparisons. ## Fuzz testing -Fuzz testing should be used when the input of a given function is not trusted. This allows to more easily detect bugs and other misbehaviours of the application, and can reveal vulnerabilities. +Fuzz testing should be used when the input of a given function is
+not trusted. This makes it easier to detect bugs and other
+misbehaviours of the application, and can reveal vulnerabilities. -Ideally, the framework in use will do coverage-based fuzzing, and then use case simplification. +Ideally, the framework in use will do coverage-based fuzzing,
+and then simplify (minimise) the failing cases. 
-Only fuzz parts where you feel the input can lead to bad behaviour of your application, in particular if you handle the deserialization yourself. +Only fuzz parts where you feel the input can lead to bad behaviour of
+your application, in particular if you handle the deserialisation
+yourself. ## Benchmarking -In general, we optimise for maintainability first, then for performance. Pick your battle, and only optimise where it makes sense, after prooving that this part of the code is impacting very negatively the whole project's performance. Basically, bring some data to the table first! +In general, we optimise for maintainability first, then for
+performance. Pick your battles, and only optimise where it makes
+sense, after proving that this part of the code significantly hurts
+the whole project's performance. Basically, bring some data to the
+table first! -Once the performance issues are fixed, it's time to build a benchmark of isolated tests, and track its behaviour over time. If it regresses significantly, raise it as a failing test. +Once the performance issues are fixed, it's time to build a benchmark
+of isolated tests, and track its behaviour over time. If it regresses
+significantly, raise it as a failing test.