* `bibtex` package moved to Suggests as it has been orphaned; it is now used conditionally (#209)
* `tibble::tbl_df` changed to `tibble::as_tibble` (#206)
* new fields in `cr_works()` output: short-container-title, references-count, is-referenced-by-count, language, content-domain, and update-to (#208)
* `query.title` field query is no longer supported by Crossref; removed from the package (#198)
* `cr_funders()`, `cr_journals()`, `cr_licenses()`, `cr_members()`, `cr_prefixes()`, `cr_types()`, and `cr_works()` gain the ability to show a progress bar during deep pagination when `works = TRUE` (#186) (#188) (see the sketch below)
* `cr_works()` returned a tibble in the `$data` slot in each case except for when a single DOI was passed to the `doi` parameter; now fixed (#184) thanks @martinjhnhadley
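As a rough sketch of the progress-bar behaviour mentioned above - the `.progress` argument name, its accepted values, and the ISSN are assumptions for illustration, not taken from this changelog:

```r
library(rcrossref)

# deep pagination over a journal's works, showing a progress bar;
# the ISSN is just an example, and `.progress` is assumed from the package
# docs - check ?cr_journals for the exact accepted values
res <- cr_journals(issn = "1932-6203", works = TRUE,
                   cursor = "*", cursor_max = 200, limit = 100,
                   .progress = TRUE)
head(res$data)
```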
* async requests added to `cr_works()`, `cr_works_()`, and `cr_citation_count()`; see the new parameter `async` (logical) in those functions. `cr_citation_count()` now accepts more than 1 DOI, and its output has changed from a numeric value to a data.frame (columns: `doi` and `count`). With `async=TRUE`, `cr_works()` gives a list of data.frames, while `cr_works_()` gives a list of JSON strings (#121) (#160) (#182)
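A small sketch of the `async` behaviour described above; the DOIs are illustrative only:

```r
library(rcrossref)

dois <- c("10.1371/journal.pone.0042793", "10.1016/j.fbr.2012.01.001")

# cr_citation_count() now accepts several DOIs and returns a data.frame
# with columns `doi` and `count`
cr_citation_count(doi = dois)

# with async = TRUE, cr_works() returns a list of data.frames,
# and cr_works_() a list of JSON strings
cr_works(dois = dois, async = TRUE)
cr_works_(dois = dois, async = TRUE)
```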
* `vcr` is now used for HTTP request/response caching (#178) (#179)
* `works` data now returning published-print and published-online fields (#181)
* new fields returned for `/works` routes (when `works = TRUE`): `isbn`, `reference_visibility`, `has_content_domain`, and `has_domain_restriction` (#176) (#177)
* new field returned for `/works` routes (when `works = TRUE`): `$reference` gives the references cited in the article (these are not the articles citing the target article, sorry) (#176)
* `cr_abstract()` gains a new way of handling many DOIs, allowing for failures without stopping progress (#174) thanks @zackbatist
* fix to `cr_works()` (#180) thanks @nicholasmfraser for the bug report
* fix to `cr_citation_count()`: when given a bad/invalid/malformed DOI, throw a warning and give back an `NA` (#164) thanks for the report @chreman
* fix to `cr_cn()` (#168)
* fix to `cr_cn()` when a bibentry is not valid, that is, not parseable. Before, the package we use to parse `bibtex` would stop on invalid bibentry data; now we work around the invalid bits and give back the bibentry (#147)
* change involving the `dois` parameter (#162) thanks @ms609
* better HTTP errors across the `cr_*` functions. We now give errors like `404 (client error): /works/blblbl - Resource not found.`, which include the HTTP status code, the major class of error (success, redirect, client, server), the route requested, and the error message (#163)
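For illustration, a sketch of how the improved errors surface; the route and message below come from the example in the entry above, and whether a given call stops or warns depends on the function:

```r
library(rcrossref)

# requesting a work that doesn't exist; expected to fail with something like:
#   404 (client error): /works/blblbl - Resource not found.
# i.e. HTTP status, error class, route requested, and the error message
cr_works(dois = "blblbl")
```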
* bug in `cr_works()` in which field queries should have been possible for title and affiliation, but were not; fixed now (#149)
* `cr_journals()` was not correctly parsing data when more than 1 ISSN was given and `works` was set to `TRUE`; fixed now (#156)
* `cr_journals()` was not correctly parsing data when no ISSN was found and `works` was set to `TRUE`; fixed now (#150)
* `cr_journals()` was not correctly handling queries with multiple ISSNs and `works` set to `FALSE`; fixed now (#151)
* parsing fix for `cr_works()` and any other `cr_*` function that sets `works=TRUE`: the `license` slot can have more than one result, and we were only giving the first back; parsing now gives back all license results (#170)
* functions using the `/works` route gain a `select` parameter to select certain fields to return (#146)
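A quick sketch of the `select` parameter; the field names and query are illustrative:

```r
library(rcrossref)

# only return the DOI, title, and issued date for each work
res <- cr_works(query = "global carbon cycle", limit = 5,
                select = c("DOI", "title", "issued"))
res$data
```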
* `sort` parameter (#142)
* field queries (`flq` parameter) (#143)
* new filters (e.g., `publisher-name`). You can see the filters with the functions `filter_details()`/`filter_names()`. Beware, some filters sometimes error with the Crossref API - they may not work, but they may; let me know at https://github.com/ropensci/rcrossref/issues or let Crossref know at https://github.com/CrossRef/rest-api-doc/issues (#136) (#139) (#141)
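To look up the available filters mentioned above, something like the following works (a minimal sketch):

```r
library(rcrossref)

filter_names()     # the filter names
filter_details()   # names plus expected values and what they mean
```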
* text mining functionality has moved to the `crminer` package (https://github.com/ropensci-archive/crminer). Functions that did text mining are now defunct; see `?rcrossref-defunct` (#122)
* now using `https` instead of `http` (#133)
* replaced `xml2::xml_find_one` with `xml2::xml_find_first` (#128) thanks @njahn82
* replaced `httr` with `crul` for HTTP requests (#132)
* improved documentation in `cr_journals` and `cr_works` about what the returned data fields `backfile_dois` and `current_dois` really mean (#105) thanks @SteveViss
* fixed `cr_prefixes` to not fail when no results are found (#130) thanks @globbestael
* fixed `cr_works` to allow queries like `facet = "license:*"` to be passed to the `facet` parameter (always allowed by Crossref, but we neglected to allow it - previously only a boolean was allowed) (#129)
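A sketch of the facet query now allowed; the `limit = 0` is an assumption used here just to skip returning individual records:

```r
library(rcrossref)

# facet on license URLs; limit = 0 skips the per-work results
res <- cr_works(facet = "license:*", limit = 0)
res$facets
```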
* fixed `cr_funders` and `cr_journals` to give back facet data along with other data (#134)
* fixed `cr_*` functions to check for a missing content-type header; instead of failing, we continue anyway and try to parse the data, as sometimes Crossref doesn't give back a content-type header at all (#127)
* `query` parameter, which queries across all fields (#111)
* now using `rappdirs` for local storage and caching for `cr_ft_text` (#106)
* `offset` parameter (#126)
* `config=verbose()` call (#124)
* `cr_search` and `cr_search_free` are now defunct. They were marked deprecated in a previous version and warned of becoming defunct, and now they are defunct. Similar functionality can be achieved with, e.g., `cr_works()` (#102)
* `crosscite` is now defunct. The functionality of this function can be achieved with `cr_cn()` (#82)
* `cr_fundref` is now defunct. Crossref changed the name fundref to funders, so we've changed our function; see `cr_funders()` (#83)
* `sample` maximum value is now 100; it was previously 1000. Documentation updated (#118)
* new filters `has-clinical-trial-number` and `has-abstract` added to the package; see `?filters` for help (#120)
* new Addin; see `?rcrossref` for more. Addin authored by Hao Zhu @haozhu233 (#114)
* new function `cr_abstract()` that tries to get an abstract via XML provided by Crossref - NOTE: an abstract is rarely available though (#116)
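A minimal sketch of `cr_abstract()`; the DOI is illustrative, and as the entry says, many works have no abstract available:

```r
library(rcrossref)

# try to pull the abstract for one DOI (illustrative); this fails for the
# many works where Crossref has no abstract on file
cr_abstract(doi = "10.1109/TASC.2010.2088091")
```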
where DOIs with no minting agency
found were failing because we were previously stopping when no agency
found. Now, we just assume Crossref and move on from there. (#117)
thanks @dfalster
!cr_r()
when number requested > 100. Actual
fix is in cr_works()
. Max for sample used to be 1000, asked
this on the Crossref API forum, see https://github.com/CrossRef/rest-api-doc/issues/146
(#115)cr_journals()
in internal parsing, was failing
in cases where ISSN
array was of length zerocr_citation_count()
to
remove PLOS reference as the function isn’t only for PLOS works
(#108)dplyr::rbind_all()
to
dplyr::bind_rows()
(#113)httr
and curl
(which httr
depends
on). Will potentially be useful to Crossref to know how many requests
come from this R client (#100)cr_search()
and cr_search_free()
use old
Crossref web services, so are now marked deprecated, and will throw a
deprecation message, but can still be used. They will both be defunct in
v0.6
of this package (#99)XML
replaced with xml2
(#98)httr::content()
calls: all parse to text then parse
content manually. in addition, encoding explicitly set to
UTF-8
on httr::content()
calls (#98)cr_journals()
- fix to parse correctly on
some failed requests (#97) thanks @nkorfcr_fundref()/cr_funders()
- parsing wasn’t
working correctly in all casesSkipped v0.4
to v0.5
because of many
changes - as described below.
cursor
, which
accepts a cursor alphanumeric string or the special *
,
which indicates that you want to initiate deep paging;
cursor_max
, which is not in the Crossref API, but just used
here in this package to indicate where to stop - otherwise, you’d get
all results, even if there was 70 million, for example. A new internal
R6
class used to make cursor requests easy (#77)id_converter()
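A sketch of deep paging with these two parameters; the query and the numbers are illustrative:

```r
library(rcrossref)

# cursor = "*" starts deep paging; cursor_max caps the total number of
# records fetched (here ~500), with `limit` records per request
res <- cr_works(query = "widget", cursor = "*",
                cursor_max = 500, limit = 100)
nrow(res$data)
```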
* new function `id_converter()` to get a PMID from a DOI and vice versa (#49)
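A sketch of `id_converter()`; the DOI and the `type` value are illustrative:

```r
library(rcrossref)

# map a DOI to its PMID (and related ids); pass a PMID and type = "pmid"
# to go the other direction
id_converter("10.1038/ng.590", type = "doi")
```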
* new function `cr_types()`, along with its low level equivalent `cr_types_()` for when you just want a list or JSON back (#92)
* new low level functions `cr_funders_()`, `cr_journals_()`, `cr_licenses_()`, `cr_members_()`, `cr_prefixes_()`, `cr_types_()`, and `cr_works_()`. These functions are a bit faster and aren't subject to parsing errors in the event of a change in the Crossref API (#93)
* new `filter_names()` and `filter_details()` functions to get information on what filters are available, the expected values, and what they mean
* `filter_names()` and `filter_details()` (#73)
* `cr_funders()` alias added to `cr_fundref()` (#74)
* `/funders` routes in `cr_funders()` (#79)
* `sample` parameter ignored unless `works=TRUE` (#81)
* `cr_cn()` now checks that the user-supplied content type is supported for the DOI minting agency associated with the DOI (#88) (thanks @njahn82)
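A sketch of a typical `cr_cn()` call (illustrative DOI); per the entry above, the requested content type now has to be one that the DOI's minting agency actually supports:

```r
library(rcrossref)

# ask for a bibtex entry; unsupported content types for this DOI's
# minting agency are now rejected up front
cr_cn(dois = "10.1126/science.169.3946.635", format = "bibtex")
```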
* removed `progress` parameter use internally where it wasn't applicable
* `sample` parameter dropped from `cr_licenses()`
* `cr_works()` parsing changed. We now don't attempt to flatten nested arrays, but instead give them back as data.frames nested within the main data.frame. For example, `author` often has many entries, so we return that as a single column, but indexing into that column gives back a data.frame with a row for each author and N columns. Hopefully this doesn't break too much code downstream :)
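A sketch of what the new nested output looks like; the query is illustrative, and the available columns vary by record:

```r
library(rcrossref)

res <- cr_works(query = "coffee", limit = 3)

# `author` comes back as a single list-column...
res$data$author

# ...and indexing into it gives a data.frame with one row per author
res$data$author[[1]]
```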
* improved docs (`?rcrossref`) to explain: what you're actually searching when you search; deprecated and defunct functions; and the high vs. low level API
* `cr_members()` now warns on error instead of stopping during parsing (#68)
* `cr_works()` now outputs links data, for full text links (#70)
* fixed a `cr_cn()` example that didn't work (#80)
* `affiliation` data inside the `author` object in Crossref search API returned data (#84)
* `award` slot in Crossref search API returned data (#90)
* `crosscite()` deprecated, will be removed in a future version of this package (#78)
* `cr_fundref()` now has a deprecation message, and will be removed in the next version (#74)
* fixed `crosscite()` to work with the Citeproc service (http://crosscite.org/citeproc/) (#60)
* fixes for `httr` v1 (#65)
* `cr_agency()` function, back up and fixed now (#63)
* new function `extract_pdf()` to extract text from PDFs
* new function `cr_ft_links()` to get links for full text content of an article (#10)
* new function `cr_ft_text()` to get full text content of an article. In addition, `cr_ft_pdf()`, `cr_ft_plain()`, and `cr_ft_xml()` are convenience functions that will get the format pdf, plain text, or xml, respectively. You can of course specify the format in the `cr_ft_text()` function with the `type` parameter (#10) (#42)
* fix to the `data.frame` output in `cr_works()`, which caused failure if a non-Crossref DOI was included (#52)
* `pmid2doi()` and `doi2pmid()` functions removed temporarily as the web service is down temporarily, but will be online again soon from Crossref (#48)
* `cr_citation()` is deprecated (still usable, but will be removed in a future version of the package); use `cr_cn()` instead (#34)