diff --git a/ci/README.md b/ci/README.md
index abfd2edc..eb4c138d 100644
--- a/ci/README.md
+++ b/ci/README.md
@@ -1,6 +1,6 @@
 # Continuous integration / analysis directory
 
-[![Build Status](https://travis-ci.com/manubot/rootstock.svg?branch=master)](https://travis-ci.com/manubot/rootstock)
+[![Build Status](https://travis-ci.org/Benjamin-Lee/deep-rules.svg?branch=master)](https://travis-ci.org/Benjamin-Lee/deep-rules)
 
 This repository uses [continuous analysis](https://doi.org/10.1101/056473 "Reproducible Computational Workflows with Continuous Analysis") to create the manuscript and commit it back to GitHub.
 [`deploy.sh`](deploy.sh) runs on successful `master` branch builds that are not pull requests.
diff --git a/ci/deploy.sh b/ci/deploy.sh
index 5266b835..066337f0 100644
--- a/ci/deploy.sh
+++ b/ci/deploy.sh
@@ -49,8 +49,8 @@
 This build is based on
 https://github.com/$TRAVIS_REPO_SLUG/commit/$TRAVIS_COMMIT.
 
 This commit was created by the following Travis CI build and job:
-https://travis-ci.com/$TRAVIS_REPO_SLUG/builds/$TRAVIS_BUILD_ID
-https://travis-ci.com/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID
+https://travis-ci.org/$TRAVIS_REPO_SLUG/builds/$TRAVIS_BUILD_ID
+https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID
 
 [ci skip]
diff --git a/content/03.ml-concepts.md b/content/03.ml-concepts.md
index 477456dd..280cce1d 100644
--- a/content/03.ml-concepts.md
+++ b/content/03.ml-concepts.md
@@ -1,4 +1,5 @@
 ## Tip 1: Concepts that apply to machine learning also apply to deep learning {#concepts}
+
 Deep learning is a distinct subfield of machine learning, but it is still a subfield.
 DL has proven to be an extremely powerful paradigm capable of outperforming “traditional” machine learning approaches in certain contexts, but it is not immune to the many limitations inherent to machine learning.
 Many best practices for machine learning also apply to deep learning.
diff --git a/content/07.architecture-and-representation.md b/content/07.architecture-and-representation.md
index 10b9e082..edf721e4 100644
--- a/content/07.architecture-and-representation.md
+++ b/content/07.architecture-and-representation.md
@@ -1,4 +1,5 @@
 ## Tip 5: Choose an appropriate data representation and neural network architecture {#architecture}
+
 Unfortunately, choosing how to represent your data and design your architecture is closer to an art than a science.
 While certain best practices have been established by the research community [@doi:10.1007/978-3-642-35289-8], architecture design choices remain largely problem-specific and are vastly empirical efforts requiring extensive experimentation.
 Furthermore, as deep learning is a quickly evolving field, many recommendations are often short-lived and frequently replaced by newer insights supported by recent empirical results.
diff --git a/content/10.blackbox.md b/content/10.blackbox.md
index 2015e98c..365f3cfa 100644
--- a/content/10.blackbox.md
+++ b/content/10.blackbox.md
@@ -1,4 +1,5 @@
-## Tip 8: Your DL models can be more transparent {#blackbox}  
+## Tip 8: Your DL models can be more transparent {#blackbox}
+
 In ML, interpretability refers to the study of the discriminative features used for classification or regression task.
 ML models can vary in terms of interpretability from a “transparent” to a “black-box” model, the first with a clear description of features importance found, for example, in common random forests implementations [@doi:10.1023/A:1010933404324].
 The second for the most widely used DL implementations.
@@ -13,4 +14,4 @@
 Model transparency is notably important in the biomedical field.
 Many authors attribute the lack of pervasiveness of deep learning tools in healthcare because of the inability to understand what these models learn [@doi:10.1109/JBHI.2016.2636665; @doi:10.1038/s41551-018-0315-x].
 In conclusion, we encourage beginners of the DL to train in parallel a base model that is easier to interpret.  In case the difference in accuracy is too high to trade-off with the DL model, pre-training AE may help to dissect which are discriminative features.
-Alternatively, algorithms based on Garson's work [@tag:garson-interpreting] can help to investigate the weights of a DL model to better understand it [TODO detail Garson’s algorithm?].
+Alternatively, algorithms based on Garson's work [@tag:garson-interpreting] can help to investigate the weights of a DL model to better understand it [**TODO detail Garson’s algorithm?**].
diff --git a/content/14.acknowledgements.md b/content/14.acknowledgements.md
index 397a8773..b44e0f3e 100644
--- a/content/14.acknowledgements.md
+++ b/content/14.acknowledgements.md
@@ -1,4 +1,4 @@
-## Acknowledgements {#acknowledgements.md}
+## Acknowledgements {#acknowledgements}
 
- The authors would like the thank Daniel Himmelstein and the developers of Manubot for creating the software that enabled the collaborative composition of this manuscript.
- We would also like to thank [**TODO**: insert the names of the contributors who don't meet the standards for authorship] for their contributions to the discussions that comprised the intial stage of the drafting process.
+The authors would like to thank Daniel Himmelstein and the developers of Manubot for creating the software that enabled the collaborative composition of this manuscript.
+We would also like to thank [**TODO**: insert the names of the contributors who don't meet the standards for authorship] for their contributions to the discussions that comprised the initial stage of the drafting process.
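The `10.blackbox.md` hunk leaves a TODO asking for detail on Garson's algorithm. As a minimal sketch of what that detail might cover: Garson's method estimates the relative importance of each input feature from the absolute weights of a single-hidden-layer network. The function name and weight shapes below are illustrative assumptions, not code from the manuscript or from any cited implementation.

```python
import numpy as np

def garson_importance(w_ih: np.ndarray, w_ho: np.ndarray) -> np.ndarray:
    """Garson-style feature importance for a one-hidden-layer network.

    w_ih: (n_inputs, n_hidden) input-to-hidden weight matrix.
    w_ho: (n_hidden,) hidden-to-output weights for a single output unit.
    Returns the relative importance of each input, summing to 1.
    """
    # Contribution of each input through each hidden unit, scaled by
    # that hidden unit's connection strength to the output.
    contrib = np.abs(w_ih) * np.abs(w_ho)            # (n_inputs, n_hidden)
    # Normalise within each hidden unit so every unit contributes one
    # "share" that is split among the inputs feeding it.
    shares = contrib / contrib.sum(axis=0, keepdims=True)
    # Sum each input's shares across hidden units and renormalise.
    importance = shares.sum(axis=1)
    return importance / importance.sum()

# Two inputs each wired to their own hidden unit get equal importance.
print(garson_importance(np.eye(2), np.ones(2)))      # → [0.5 0.5]
```

Because each hidden unit's column is normalised before summing, the result depends on how weight magnitude is distributed across inputs within a unit, not on the unit's overall scale; this is the usual caveat when reading Garson scores.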