The rmd and removeDR functions are in acc.roxygen2 now.
The examples will have to be reformatted in a better way to return head/minimal pieces of the data as well. It seems that you make a folder in inst called staticdocs that contains the sections you want (with a description of each) and optional icons. You can also have a footer and index, as seen here:
https://github.com/hadley/ggplot2/tree/master/inst/staticdocs
This produces the rendered ggplot2 documentation site.
NOTE: you must have a folder named staticdocs in the inst directory.
DEMO path: C:\Users\trinker\GitHub\qdap\inst\staticdocs
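Putting that together, the staticdocs folder described above holds roughly this (a sketch of the layout implied by the description; see the ggplot2 repo linked earlier for the authoritative example):

inst/staticdocs/
    index.r      # the index: defines the sections, each with a description
    icons.R      # the optional icons
    footer.html  # the optional footer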
Also, it's possible to manually alter the document to remove the parts of the readme I don't want, as well as to create functions and icons that link to commonly shared .Rd files. Write a function to do this automatically (a sketch of the idea follows).
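A minimal sketch of what such a helper might look like (trim_statdoc and its marker arguments are hypothetical; expand_statdoc() in acc.roxygen2 is where this idea eventually landed):

# hypothetical helper: drop an unwanted chunk of the generated index.html
# by deleting everything between two marker strings
trim_statdoc <- function(path, start.mark, end.mark) {
    x <- readLines(path)
    i <- grep(start.mark, x, fixed = TRUE)[1]   # first line of the chunk
    j <- grep(end.mark, x, fixed = TRUE)[1]     # last line of the chunk
    if (!is.na(i) && !is.na(j)) x <- x[-(i:j)]
    writeLines(x, path)
}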
workflow:
Think about categories: 1) read in/output 2) cleaning/parsing 3) Reshaping 4) Descriptives 5) Measures 6) Visualization 7) Coding tools 8) Word lists 9) viewing 10) Package tools 11) Data sets
Related task: https://github.com/trinker/qdap/issues/108
This allows quick setup for easy cut and paste into the staticdocs setup:
setwd("C:/Users/trinker/GitHub/qdap/R")
x <- dir()
y <- mgsub(".R", "", x)
cat(paste(paste0("\"", y, "\","), collapse="\n"),
file=paste0(dt, "qdap_funs.txt"))
setwd("C:/Users/trinker/GitHub/qdap/data")
x <- dir()
y <- mgsub(".rda", "", x)
cat(paste(paste0("\"", y, "\","), collapse="\n"),
file=paste0(dt, "qdap_dat.txt"))
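The resulting qdap_funs.txt holds one quoted, comma-suffixed name per line, ready to paste into the classification lists below; the first few lines look like:

"adjacency_matrix",
"all_words",
"automated_readability_index",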
Classifications:
1) read in/output
"delete",
"dir_map",
"mcsv_r",
"read.transcript"
2) cleaning/parsing
"bracketX",
"clean",
"incomplete.replace",
"multigsub",
"potential_NA",
"qprep",
"replace_abbreviation",
"replace_contraction",
"replace_number",
"replace_symbol",
"rm_row",
"scrubber",
"space_fill",
"spaste",
"stemmer",
"Trim"
3) Reshaping
"adjacency_matrix",
"colSplit",
"colsplit2df",
"gantt",
"gantt_rep",
"key_merge",
"paste2",
"qcombine",
"sentSplit",
"speakerSplit"
4) Descriptives
"distTab",
"multiscale",
"outlier.detect",
"outlier.labeler"
5) Measures
"automated_readability_index",
"dissimilarity",
"diversity",
"formality",
"kullback.leibler",
"polarity"
6) Visualization
"gradient_cloud",
"gantt_plot",
"gantt_wrap",
"plot.character.table",
"plot.diversity",
"plot.formality",
"plot.polarity",
"plot.pos.by",
"plot.question_type",
"plot.termco",
"plot.word_stats",
"qheat",
"rank_freq_mplot",
"trans.cloud",
"trans.venn",
"word.network.plot"
7) Coding tools
"cm_code.blank",
"cm_code.combine",
"cm_code.exclude",
"cm_code.overlap",
"cm_code.transform",
"cm_combine.dummy",
"cm_df.fill",
"cm_df.temp",
"cm_df.transcript",
"cm_df2long",
"cm_distance",
"cm_dummy2long",
"cm_long2dummy",
"cm_range.temp",
"cm_range2long",
"cm_time.temp",
"cm_time2long"
8) Counts
"question_type",
"syllable.sum",
"termco",
"termco.c",
"wfm",
"word.count",
"word_stats"
9) viewing
"htruncdf",
"left.just",
"strWrap"
10) Package tools
"blank2NA"
"capitalizer",
"duplicates",
"convert",
"hash",
"lookup",
"qcv",
"replacer",
"Search",
"text2color",
"url_dl",
"v.outer"
11) Identifying
"end_inc",
"end_mark",
"imperative",
"NAer",
"pos"
12) Word extraction/comparison
"all_words",
"bag.o.words",
"common",
"exclude",
"stopwords",
"strip",
"synonyms",
"word_associate",
"word_diff_list",
"word_list"
13) Printing
"print.adjacency_matrix",
"print.adjacency_matrix",
"print.cm_distance",
"print.colsplit2df",
"print.dissimilarity",
"print.diversity",
"print.formality",
"print.kullback.leibler",
"print.polarity",
"print.pos",
"print.pos.by",
"print.question_type",
"print.termco",
"print.v.outer",
"print.word_associate",
"print.word_list",
"print.word_stats"
14) Data sets
"abbreviations",
"action.verbs",
"adverb",
"BuckleySaltonSWL",
"contractions",
"DATA",
"DATA2",
"DICTIONARY",
"emoticon",
"env.syl",
"env.syn",
"increase.amplification.words",
"interjections",
"labMT",
"mraja1",
"mraja1spl",
"negation.words",
"negative.words",
"OnixTxtRetToolkitSWL1",
"positive.words",
"preposition",
"raj.act.1",
"raj.act.2",
"raj.act.3",
"raj.act.4",
"raj.act.5",
"raj.demographics",
"raj",
"rajPOS",
"rajSPLIT",
"SYNONYM",
"Top100Words",
"Top200Words",
"Top25Words",
Categorization has been completed: http://trinker.github.com/qdap/
Now, I started using:
library(highlight)
library(staticdocs)
library(qdap)          # qcv()
library(acc.roxygen2)  # expand_statdoc()

build_package(package = "C:/Users/trinker/GitHub/qdap",
    base_path = "C:/Users/trinker/Desktop/qdap/", examples = FALSE)

path <- paste0("C:/Users/trinker/Desktop/qdap/", "index.html")
expand_statdoc(path, to.icon = qcv(syn, mgsub, adjmat, wc, char.table))
All that's left is the examples issue:
# packages
library(highlight); library(qdap); library(staticdocs); library(acc.roxygen2)

# STEP 1: create the static docs
# right now examples = FALSE; in the future this will be TRUE
# in the future qdap2 will be the go-to source
build_package(package = "C:/Users/trinker/GitHub/qdap",
    base_path = "C:/Users/trinker/Desktop/qdap/", examples = FALSE)

# STEP 2: reshape the index
path <- "C:/Users/trinker/Desktop/qdap"
path2 <- paste0(path, "/index.html")
rdme <- "C:/Users/trinker/GitHub/qdap/inst/extra_statdoc/readme.R"
extras <- qcv(right.just, coleman_liau, flesch_kincaid, fry,
    linsear_write, SMOG, syn, mgsub, adjmat, wc, char.table, wfdf)
expand_statdoc(path2, to.icon = extras, readme = rdme)

# STEP 3: move to trinker.github.com
file <- "C:/Users/trinker/GitHub/trinker.github.com/"
delete(paste0(file, "qdap"))  # qdap's delete(): remove the old copy
file.copy(path, file, overwrite = TRUE, recursive = TRUE)
delete(path)
This will be more of a focus after qdap has been uploaded to CRAN.
For other searchers: currently (1-23-12) the highlight package, which staticdocs depends on, has been archived. To get highlight and staticdocs:
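One way to install them, assuming the devtools package (the highlight version number is illustrative; check the CRAN archive for the latest archived release):

install.packages("devtools")
library(devtools)
install_version("highlight", version = "0.4.4")  # illustrative version, from the CRAN archive
install_github("hadley/staticdocs")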
I'm using the following work flow:
1) Create a separate qdap2 that removes the \dontruns
2) Use the function here to remove the \dontruns (a sketch follows below)
3) Run staticdocs
   NOTES: a. highlight has to be loaded for the cute formatting; b. I've needed to run this in a vanilla session (not sure why)
4) Upload
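The de-\dontrun step might look roughly like this (strip_dontrun is a hypothetical name standing in for the linked function, and the sketch assumes the wrapper's braces sit on their own lines, as in qdap's roxygen2 blocks):

# hypothetical sketch: strip \dontrun{} wrappers from roxygen2 example blocks
strip_dontrun <- function(infile, outfile = infile) {
    x <- readLines(infile)
    open <- grep("\\dontrun{", x, fixed = TRUE)
    drop <- integer(0)
    for (i in open) {
        depth <- 1                 # the opening line holds one unmatched {
        j <- i
        while (depth > 0 && j < length(x)) {
            j <- j + 1
            n.open  <- lengths(regmatches(x[j], gregexpr("{", x[j], fixed = TRUE)))
            n.close <- lengths(regmatches(x[j], gregexpr("}", x[j], fixed = TRUE)))
            depth <- depth + n.open - n.close
        }
        drop <- c(drop, i, j)      # drop the \dontrun{ line and its closing }
    }
    if (length(drop)) x <- x[-drop]
    writeLines(x, outfile)
}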
=============================================================
Also, give topics structure. Not sure how to do this, but this may be helpful:
https://github.com/hadley/ggplot2/blob/master/inst/staticdocs/index.r
I think the function used in that file does it (sketched below).
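For illustration, a qdap index.r might start like this, assuming staticdocs' sd_section() API as used in the ggplot2 file above (the section descriptions are placeholders; the groupings come from the classifications earlier in this issue):

# inst/staticdocs/index.r -- a sketch assuming staticdocs' sd_section() API
sd_section("Read in/Output",
    "Tools for importing and exporting transcripts.",      # placeholder description
    c("delete", "dir_map", "mcsv_r", "read.transcript")
)
sd_section("Cleaning/Parsing",
    "Tools for preparing raw transcripts for analysis.",   # placeholder description
    c("bracketX", "clean", "scrubber", "Trim")
)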