Bifrost®: Flooding in Earth and Related Environmental Sciences

v. dev (8f744784)

Niemi, Kristian. (2026, 28 april). Bifrost®-analys: Flooding in Earth and Related Environmental Sciences. Karlstads universitet. https://bifrost.kau.se/forskning/miljo/flooding_in_earth_and_related_environmental_sciences.html

1 145
2026*: 4
946
83% of total 2026*: 100%
52%
2026*: 50%
+13.4%
Average annual growth rate: 1995–2025
30%
Among level-classified journals
2026*: 25%
1084
2026*: 4
*Year may be incomplete
About key indicators

Key indicators summarise the report’s central metrics. All values are calculated from the underlying dataset and refer to the full period unless otherwise stated. Percentages (peer-reviewed, Open Access, international collaboration) are calculated as a share of total publications per year.

Percentage change is not shown when the base value is below 10 units, as small base values produce statistically unstable percentages (Hicks et al. (2015), principle 8; cf. CDC rule for n < 16). Absolute values are shown instead.

The following query was used:
hsv:(Earth and Related Environmental Sciences) AND ((flooding) OR (flood) OR (översvämning)) srt2:(1995-2077)
Database: SwePub
Data quality: remarks
Publication activity
Publication types
Number of scientific publications per type over years
83%
Peer-reviewed
2026: 100%
100%
Scientific
2026: 100%
449
Unique journals
2026: 4
30%
Level 2 (Norwegian list)
2026: 25%
249 of 821 classified

Insights
The peer-reviewed share: 86.1 % recent decade (2017–2026), up from 81.5 % previous decade (2007–2016). Long-term trend (1995–2026): increasing. Note that 0% in the first year of the period may reflect incomplete metadata rather than an actual absence of peer review.

1142 publications (100%) scientific, 3 publications (0%) other.

Older years (1995–2018) aggregated for readability. Full timespan available in data export.

Publication types over time
Journals: peer reviewed and other scientific
Missing match in the Channel Register (HK-dir). Common causes: missing ISSN in source data, channels outside the register, conference series, or recently launched journals. Lack of classification does not necessarily mean the journal lacks peer review.
Publications by NPI level

The Norwegian Publication Indicator (NPI), also known as the Norwegian list, classifies publication channels into two levels. Level 2 (top ~20% per field) is considered the most prestigious channels. Level 1 covers other approved channels.

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
NPI level (1 or 2) is retrieved from the Norwegian Channel Register (HK-dir) via ISSN/ISBN matching. Publications without a match are assigned level X.
Limitations
  • The channel register does not cover all scientific publishing. Publications outside the channel list lack an NPI level and are counted as level X.
  • Level-based indicators should be interpreted contextually and not used as the sole quality measure. Hicks et al. (2015)
  • NPI classification is sourced from the Norwegian channel register (HK-dir) and does not cover all scientific publishing. Publications outside the channel list have no NPI level.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
30.3%
Level 2
2026: 25.0%
249
Level 2 (Count)
2026: 1
548
Level 1 (Count)
2026: 3
324
Unclassified
28.3% of total

A high proportion of publications (≥10%) lack NPI classification. Common causes: missing ISSN in source data, channels outside the register (~40,000 journals), conference series, or recently launched journals. See the journal table for unclassified channels.

NPI level by year
Publications without NPI classification
Method and limitations
Data source
SwePub + Kanalregisteret (HK-dir)
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • Volume measures count publications, not pages published or contribution size.
  • Conference papers may be underrepresented in the source database, particularly for older periods and certain disciplines.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
  • NPI classification is sourced from the Norwegian channel register (HK-dir) and does not cover all scientific publishing. Publications outside the channel list have no NPI level.
DORA

DORA mode is enabled for this report. Bifrost evaluates the report against the principles of DORA (2012) and CoARA (2022).

The report contains elements that are not compatible with DORA principle 1: “Do not use journal-based metrics as a surrogate measure of the quality of individual research articles.”

NPI level classification (Level 1/2) is shown in the report. NPI is a Nordic classification system that ranks publication channels by academic standing, i.e. a channel-based ranking that DORA advises against using as a quality surrogate. The classification is presented here as descriptive information about publication patterns, not as a measure of individual article quality.

Researchers

Researchers are listed below, sorted by scientific productivity.

5 152
Unique researchers
2026: 20
5%
Top 10 researchers’ publication share
The 10 most productive (of 5 152 total) researchers’ share of all publications
6.1
Co-authors/pub.
2026: 5.2

Insights
The top 20% most productive researchers account for 29% of publications (Gini 0.11, scale: 0 = even, 1 = fully concentrated). Average number of co-authors: 7.4 recent decade (2017–2026), up from 4.6 (2007–2016).

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • Lists show the most productive researchers by volume. Rankings reflect registered publishing activity, not scientific quality or impact.
  • This section is descriptive. Individual researchers are not evaluated; the measure is a group result at aggregate level.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Collaboration
Co-authorship

Insights
74 research groups of roughly equal size; no single cluster dominates. Each researcher collaborates with an average of 5.1 others (a densely connected network). Clear cluster structure (Modularity 0.98); researchers primarily work within their own group. The network is sparse: only 0.8% of all researcher pairs have a direct collaboration link.

Methodology

Each node represents a researcher and each link a co-authorship. Colors indicate research groups (clusters) identified via modularity analysis. Node size reflects number of publications.

The co-authorship network is built from co-authored publications. Each node represents a researcher, and each edge is weighted by number of joint publications. Edge weights are normalized using association strength Van Eck et al. (2009) before clustering with the Louvain algorithm. Centrality measures: degree (number of collaborators), collaboration intensity (total co-authoring frequency), and bridge score (weighted betweenness using inverse weights) Newman (2004). Network density measures the proportion of realized vs. possible collaborations. Terminology: «Collaborators (avg)» = mean degree; «Clustering» = modularity Blondel et al. (2008).

5.1
Collaborators (avg)
1995–2026
0.98
Clustering
1995–2026
Degree distribution
Summary

Percentages are calculated on pairs where both authors have country data (2523 classified of 2566 total, coverage 98%). Of which 18 pairs where both authors lack institutional affiliation, 0 pairs where the institution could not be mapped to a country, and 25 pairs where one side lacks data.

Individual network statistics

‘Co-authored texts’ indicates the number of texts the author has written together with one or more co-authors.

Network statistics show central nodes in the collaboration network. Degree is the number of direct collaborations, while betweenness shows which authors act as bridges between different groups.

Most common co-authorships

The first co-author listed is the one the author has written with the most times. ‘Number’ indicates the number of co-authored texts with the author. Up to four additional co-authors are listed, in descending order of co-authorship.

Network of co-authors

Below is a visualization of the 74 different groupings in the dataset. The colors indicate different groups.

The network shows 509 of 5152 co-authors: those who share at least 2 publication with another.

Why are not all co-authors shown?

A co-author is only included in the network once they have co-authored at least 2 publication with another. Pairs who meet in only a single publication are therefore excluded. The threshold dampens noise so recurring collaborations stand out more clearly.

Publications with more than 25 authors are excluded when building the network. These are typically meta-studies, systematic reviews, and large consortium articles where the full author list is printed. Letting them in would connect nearly everyone to nearly everyone else.

Because the network is large, weak links have also been dropped via backbone filtering (509 nodes, 1029 edges in the original graph). The groups above are computed on the filtered, sparser network.

The co-authorship network comprises 509 researchers and 1029 collaborations. Due to the size of the dataset, a simplified version highlighting the strongest collaboration patterns (backbone analysis) is shown. Individual connections with few joint publications have been omitted for clarity.

Group membership

Author names to the right; group ID to the left. You can see the size of the groups and the most common keywords of the groups in the tables that follow. A combination of search and sorting can be used to further explore group membership.

Group size
Group keywords

The table is limited to a) groups with more than 3 members; b) groups with at least one keyword in any publication; c) the ten most used keywords per group.

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • A link in the network means two researchers share at least one publication in the selection; link strength indicates the number of co-authored works. Informal collaborations and unpublished projects are not visible.
  • Short time periods or small research groups produce sparse networks. Isolated nodes indicate researchers with few registered collaborations in the selection, not absence of collaboration in general.
  • Pairs sharing fewer than 2 publications are not included in the network (makeCoauthorMinPubs).
  • Publications with more than 25 authors are excluded from the co-authorship analysis (makeMaxAuthorsPerPub).
  • For networks exceeding 200 nodes or 500 edges, adaptive edge reduction is applied to the visualization (disparity filter, Serrano et al. (2009), or quantile threshold depending on size). Network statistics (centrality, cluster membership) are always computed on the full graph.
  • Whole counting: each shared publication contributes weight 1 per pair. To give each publication equal total weight regardless of author count, enable makeFractionalCounting = TRUE (per-publication 1/(N−1) weighting following Perianes-Rodriguez et al. 2016). Perianes-Rodriguez et al. (2016)
  • Association strength: AS(i,j) = w_ij / (k_i × k_j / 2m). The normalization reduces dominance of high-degree nodes. Van Eck et al. (2009)
  • Degree indicates the number of unique collaboration partners (network topology). Strength indicates total co-publication intensity (edge weights). A researcher with high Degree but low Strength has many shallow collaborations; conversely, high Strength with low Degree indicates few but intensive collaborations. Opsahl et al. (2010)
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Bibliometric network visualizations complement, rather than replace, expert judgment. Van Eck et al. (2014)

See also the supervisor and opponent network in the researcher section for an analysis of academic collaboration patterns beyond co-authorship.

Supervisor and opponent network

Unlike the co-authorship analysis, which maps collaboration through joint publications, this section reveals the academic networks that emerge through dissertation supervision and opposition. Supervisors and opponents active at multiple institutions form informal knowledge bridges between organizations — relationships rarely captured by traditional bibliometric measures but which can reveal important patterns in academic knowledge transfer.

Insights
35 researchers have supervised or opposed across institutional boundaries. Strongest connection: Karlstads universitet – Uppsala universitet (Connection strength: 4). Based on 66% of dissertations with identifiable supervisors.

Supervisions  Oppositions
Supervisor and opponent network, figure
Supervisor and opponent network, table

Rangordningen är inte tillförlitlig på grund av ofullständig data. Listan visas i alfabetisk ordning.

Method description

The supervisor/opponent network is separate from the international collaboration map. The map is based on co-authorship between author affiliations, while supervisor/opponent relations are shown in the network below.

The network is based on supervisor and opponent relationships extracted from SwePub records. Connection strength is calculated as (number of supervisions × 2) + (number of oppositions × 1). The weighting (2:1) is a Bifrost convention reflecting that supervision is a longer and deeper collaborative relationship than opposition. The method lacks established bibliometric practice; it was developed specifically for Bifrost.

The supervisor:opponent weighting (2:1) is a Bifrost convention to reflect the supervisor’s greater role in the dissertation process. This is not established bibliometric practice.

Higher education institutions
594
Institutions
35%
Top-3 share

Insights
710 institutions contribute. Uppsala University, Stockholm University and Swedish University for Agricultural Sciences account for 22 % — a broad distribution.

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Full counting of publication appearances
Limitations
  • The count shows co-author affiliation appearances, not unique publications. A publication with three co-authors from the same institution counts three times.
  • Institution names have been harmonised against an internal name list. Unmatched variants are shown separately or excluded.
  • A total of 137 entries in the raw data were excluded from the table: 128 country/city names (geographic entities, not institutions), 5 manually verified non-institutions (known_unmapped), 7 departments, faculties or centres (pattern-based filter). This is why the institution count may differ from the number of unique affiliations in the source data.
International collaboration

Overview of international collaboration based on co-authorship and affiliations in publications.

78
Collaboration countries
13 on average per year
28%
International collaboration

Insights
78 countries represented in collaborations. Sverige, Storbritannien and United States are most common. 25.2 % of publications involve international co-authors — up from 10 % (2007–2016) to 37 % (2017–2026).

Collaboration countries
Distribution per year
Co-authorship by country

Based on co-author affiliation country.

Networks and publications, geographically

Insights:
The network comprises 273 institutions with 1047 collaboration relationships. The strongest collaboration is between University of Reading and Uppsala universitet (29 co-publications). Uppsala universitet has the most collaboration partners (120).

Period overview

The map primarily shows co-authorship between institutions. Supervisor/opponent links are shown as separate network relations and may be fewer, because only records with clear institutional affiliation can be included.

273
Institutions in network
1995–2026
1 047
Collaboration relationships
1995–2026
29
Co-publications: University of Reading – Uppsala universitet
1995–2026
Method and limitations
Data source
SwePub + OpenStreetMap/Nominatim
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • The international share is calculated only from author affiliation country. Conference location, publication country, and other metadata are not counted as international collaboration.
  • Country analysis is based on co-author affiliation country. Full counting means each country in a co-publication is counted once.
  • The network map shows collaboration relationships between institutions based on co-authored publications. Node size reflects the number of collaboration partners, edge width reflects collaboration strength (Salton’s cosine index).
  • Based on author affiliation data. Incomplete affiliation information may affect results.
Subject areas
Subject categories
Natural Sciences (81%)
Dominant category
6
Subject areas (level 1)
0.39 / 1.00
Subject diversity (evenness)

Insights
Natural sciences dominates (82 %). Subject breadth has increased — research has become more diversified (1995–2026, H: 0.57 → 0.76). Moderate interdisciplinarity — research combines related subject areas. Rao-Stirling: 0.547 (where 0 = single discipline, 1 = maximum diversity). Based on 6 HSV main areas (Swedish classification). Rao-Stirling (Stirling, 2007)

Method: diversity indices

Shannon H (evenness index) measures how evenly publications are distributed across subject areas. A value of 1.00 means perfect evenness; lower values indicate dominance by individual areas. Rao-Stirling measures interdisciplinarity by weighing both the distribution and the taxonomic distance between subject areas according to the Swedish classification system. The scale ranges from 0 (all publications in one subject) to 1 (maximum spread across distant subject areas).

Level 1
Proportional view

Proportion of total publications per year (%). Note that a publication may belong to multiple categories.

Category frequency over time
Level 2

There are 30 level 2 categories in the dataset. Showing the 25 most frequent here.

Proportional view

Proportion of total publications per year (%). Note that a publication may belong to multiple categories.

Subject categories level 2 (table)
Level 3

During 1995–2026, 18.4% (211 of 1,145) publications lack subject classification at this level.

Proportional view

Proportion of total publications per year (%). Note that a publication may belong to multiple categories.

Category frequency over time
Method and limitations
Data source
SwePub
Time period
1995–2026
Classification system
HSV/UKÄ (5 nivåer), OECD FoS (3 nivåer)
Counting method
Full counting per category
Limitations
  • Explore which subject categories are represented in the dataset. Note that publications usually have several categories. Therefore, it is the rule, not the exception, that the percentages together constitute more than 100%. If x is 100% and y is 15%, it means that all publications have been categorized as x, and of them, 15% have also been categorized as y
  • A publication classified under multiple subjects is counted in each category. The sum therefore exceeds the publication count — this is correct, not an error.
  • Classification may vary in precision across institutions and periods. Comparisons should be made with caution.
  • The Rao-Stirling index is computed at portfolio level (subject code shares across the entire dataset), not as a mean of per-publication RS. Absolute RS values are not directly comparable to benchmarks based on per-article calculations or other classification systems.
Keywords
HSV subject categories have been filtered from keywords

geovetenskap; miljökemi; skogsskötsel; remote sensing; fiske; skogsteknik; fjärranalys; lantmäteri; vatten i natur och samhälle; exogen geovetenskap; övrig geovetenskap; geoteknik; miljöteknik; lärande; geovetenskap(ersätts med naturgeografi); morfologi; atmosfärs- och hydrosfärsvetenskap; medicin; learning; design (overall design); mathematics

3 217
Unique keywords
2026: 13
climate change (7%)
Top keyword
2026: climate adaptation

Insights
Broad keyword profile — no single term dominates (HHI: 0.0011 — Herfindahl-Hirschman Index, where 0 = perfectly even distribution, 1 = one term dominates entirely). Most common is “climate change” appearing in 7 % of publications, across a total of 3217.

Colors indicate frequency quantiles within this dataset.

Red: Highest frequency (7-5.62%); Blue: High frequency (5.62-4.24%); Green: Medium frequency (4.24-2.86%); Orange: Low frequency (2.86-1.48%); Gray: Lowest frequency (1.48-0.1%)

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • Keywords are a mix of author-assigned and automatically generated terms. Indexing consistency varies across sources and time periods.
  • English keywords often dominate. Publications in Swedish or other languages are therefore often underrepresented in frequency analyses.
  • Keywords occurring in more than 70% of all publications are automatically excluded to prevent generic terms from dominating the analysis.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Keyword Insights
Declining themes
Historical Trends

The following keywords had periods of high activity in the past but have since declined. The analysis shows when they peaked, what drove the interest, and how activity has evolved since.

earth sciences: historical trend (2006–2011)

Burst period: 2006–2011 (moderate burst)

Peak year: 2007 (9 pubs.)

Driving actors during period:

  • Researchers: Beven, Keith J. (5 pubs.), Morad, Sadoon (4 pubs.), Beven, Keith J (3 pubs.)
  • Institutions: Uppsala University (33 pubs.), Stockholm University (16 pubs.), Swedish University for Agricultural Sciences (7 pubs.)

Co-varying keywords: uncertainty, flood risk, sequence stratigraphy

Current status: Declining

floods: historical trend (2016–2021)

Burst period: 2016–2021 (moderate burst)

Peak year: 2020 (8 pubs.)

Driving actors during period:

  • Researchers: Di Baldassarre, Giuliano (3 pubs.), Halldin, Sven (3 pubs.), Mazzoleni, Maurizio (3 pubs.)
  • Institutions: Uppsala University (36 pubs.), Stockholm University (7 pubs.), University of Gothenburg (7 pubs.)

Co-varying keywords: droughts, hydrology, climate change

Current status: Stable

the changing earth: historical trend (2016–2020)

Burst period: 2016–2020 (moderate burst)

Peak year: 2017 (7 pubs.)

Driving actors during period:

  • Researchers: Andersson, Per S. (3 pubs.), Kutscher, Liselott (3 pubs.), Maximov, Trofim (3 pubs.)
  • Institutions: Stockholm University (12 pubs.), Curtin University (7 pubs.), Lund University (5 pubs.)

Co-varying keywords: den föränderliga jorden, lena river, amphibole microchemistry

Current status: Stable

den föränderliga jorden: historical trend (2016–2020)

Burst period: 2016–2020 (moderate burst)

Peak year: 2017 (5 pubs.)

Driving actors during period:

  • Researchers: Andersson, Per S. (2 pubs.), Kutscher, Liselott (2 pubs.), Maximov, Trofim (2 pubs.)
  • Institutions: Curtin University (7 pubs.), Stockholm University (6 pubs.), Lund University (5 pubs.)

Co-varying keywords: the changing earth, lena river, apollo 14

Current status: Stable

climate variability: historical trend (2017–2020)

Burst period: 2017–2020 (moderate burst)

Peak year: 2019 (3 pubs.)

Driving actors during period:

  • Researchers: Charpentier Ljungqvist, Fredrik (2 pubs.), Aguilar, Camilo Andrés Melo (1 pubs.), Avalos, G. (1 pubs.)
  • Institutions: Stockholm University (8 pubs.), University of Reading (3 pubs.), Uppsala University (3 pubs.)

Co-varying keywords: mediterranean, past millennium, stable isotopes

Current status: Stable

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Burst detection uses an automaton model implemented via the bursts package. The method identifies periods of statistically significant increased occurrence of individual keywords. Kleinberg (2003)
Limitations
  • Burst detection requires at least 5–10 years of data for reliable results. Short time series may produce unstable or misleading burst periods.
  • Keywords are a mix of author-assigned and automatically generated terms. Indexing consistency varies across sources and time periods.
  • English keywords often dominate. Publications in Swedish or other languages are therefore often underrepresented in frequency analyses.
  • Keywords occurring in more than 70% of all publications are automatically excluded to prevent generic terms from dominating the analysis.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Keyword co-occurrence

The heatmap shows how often keywords co-occur in the same publications. Association strength Van Eck et al. (2009) normalizes co-occurrence by the product of the individual keyword frequencies. Red asterisks () in the upper-right corner of cells mark statistically significant co-occurrences (p < 0.05).

The heatmap shows co-occurrence strength for all pairwise combinations of the most frequent keywords, including weak relations. It is a complete N×N matrix. The keyword network below complements this view: it shows cluster structure through backbone filtering that hides weak edges to emphasize strong patterns — the two visualizations measure the same thing but illuminate different aspects.
Top co-occurring keyword pairs
Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Co-occurrence matrix strength is computed using association strength: c_ij / (s_i × s_j / 2m), where s is document frequency and m is the sum of pairwise co-occurrences. Van Eck et al. (2009)
Limitations
  • Association strength is normalized following Van Eck & Waltman (2009): AS(i,j) = c_ij / (c_i × c_j / 2m), where c_ij is the document frequency for the pair, c_i and c_j are the document frequencies for the individual keywords, and m is the sum of co-occurrences over unique pairs (upper triangle of the co-occurrence matrix). Van Eck et al. (2009)
  • Co-occurrence is counted per document (whole counting). Fractional counting is not applied at the keyword level — a deliberate choice because the keywords are controlled terms (SwePub) or extracted concepts (OpenAlex), not free-text author names.
  • The heatmap displays the 15 most frequent keywords (ranked by document frequency). Keywords below the minimum frequency threshold are excluded. The count can be adjusted in the report configuration.
  • Statistical significance testing uses the hypergeometric distribution at the pair level. No correction for multiple testing is applied — the analysis is exploratory, not confirmatory.
  • The heatmap remains readable up to top_n ≤ 25. For larger datasets — see also the keyword network.
  • The heatmap color scale is clipped at the 95th percentile of observed association strengths. Association strength with 2m rescaling is mathematically unbounded: rare keyword pairs with low individual document frequencies can produce extreme values that otherwise dominate the scale and render other cells invisible. Actual maximum values are shown in the top-pairs table.
  • Graph-based visualizations degrade for large networks (Van Eck & Waltman, 2014, pp. 288–289). The heatmap is designed for top-N pairs; the network graph uses backbone filtering to handle larger networks. Van Eck et al. (2014)
  • Keywords are a mix of author-assigned and automatically generated terms. Indexing consistency varies across sources and time periods.
  • English keywords often dominate. Publications in Swedish or other languages are therefore often underrepresented in frequency analyses.
  • Keywords occurring in more than 70% of all publications are automatically excluded to prevent generic terms from dominating the analysis.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Network diagram, keywords

Keywords that frequently co-occur in the same publications form thematic clusters. The table below summarizes the clusters; the interactive graph shows the relationships visually.

The network shows cluster structure and network position: backbone filtering retains statistically significant edges and hides weak relations to emphasize strong patterns. The co-occurrence heatmap above complements this view: it shows the full N×N matrix for the most frequent keywords, including weak pairs not visible in the network.

The network diagram shows how keywords relate to each other based on co-occurrence in publications. Larger nodes mean more frequent keywords. Lines show co-occurrence. Colors indicate thematic clusters identified using the Leiden algorithm (Traag et al., 2019).

Cluster overview
Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Co-occurrence matrix strength is computed using association strength: c_ij / (s_i × s_j / 2m), where s is document frequency and m is the sum of pairwise co-occurrences. Van Eck et al. (2009)
Limitations
  • The SDSM filter tests each keyword pair against a stochastic null model that controls for both keyword frequency and the number of keywords per publication. Edges that do not deviate significantly are removed. Neal (2022)
  • Clusters are detected using the Leiden algorithm Traag et al. (2019), which identifies thematic groups where keywords co-occur more strongly within the group than between groups.
  • The network is limited to the most frequent keywords (top N). Rare keywords are excluded, which may hide emerging topics.
  • Backbone filtering is applied adaptively: fewer than 50 nodes no filtering, 50–99 nodes quantile threshold (median), 100–149 nodes disparity filter α=0.20, ≥ 150 nodes α=0.10. The thresholds are engineering heuristics, not methodologically grounded.
  • Graph-based layouts degrade for large networks (Van Eck et al., 2014). Above 200 nodes, visual clarity depends on backbone filtering and node reduction. Statistics (centrality, cluster membership) are always computed on the complete graph.
  • The number of keywords in the network is adapted to the material: roughly 30 % of the unique keywords are included, with a floor of 60 and a ceiling of 250. Of these, roughly 65 % (at least 30, at most 70 nodes) are shown initially to keep the graph readable. The remaining keywords can be added via the slider below the graph. These limits prevent both sparse and overloaded networks.
  • Keywords are a mix of author-assigned and automatically generated terms. Indexing consistency varies across sources and time periods.
  • Keywords in Swedish and English are mixed without separation. Lemmatization is not applied: ‘learning’ and ‘learners’ are treated as separate keywords. This is a deliberate choice for controlled terms, not a deficiency — but it affects the concentration around English-language concepts.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Word frequency

The frequency of individual words in the dataset as a whole. Words have been taken from title, abstract, and keywords. “Frequency” is total uses, including the number of mentions in the same text, while “publications” is the number of unique texts where the word appears.

19 467
Unique words
flood (71.2%)
Top word

Insights
The 5 most common words (by share of publications) are: “flood” (71%), “water” (45%), “flooding” (39%), “climate” (35%), “risk” (30%). These patterns reflect the thematic core of the dataset.

Word frequency table

Notice: The dataset contains 19 467 rows. For performance reasons, only the 8 000 words with the highest frequency are displayed in the table.

To examine the frequency of specific words more closely, enter them in the variable ‘to_stem’.

Method and limitations
Data source
SwePub
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • Word frequencies are derived from titles, abstracts, and keywords in the dataset. Swedish and English stop words are filtered before calculation.
  • The analysis is language-dependent and does not merge synonymous terms across languages. No stemming or lemmatization is applied — word forms are counted separately.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Word Frequency Trends

These trends are indicative and complement the keyword analysis above.

Methodological note: Word frequency analysis is based on individual words extracted from title, abstract, and keywords. Unlike author-selected keywords, individual words can be noisier and more ambiguous — for example, the word ‘system’ may appear in both technical and social science contexts, while the keyword ‘adaptive systems’ is more precise. Stricter thresholds are used (minimum 10 occurrences, correlation > 0.5) and academic stopwords are excluded.

Rising/declining shows trends over time (Spearman correlation). New/disappearing shows lifecycle — when words started or stopped being used.
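The rising/declining classification can be sketched as follows, assuming one yearly count series per word; the thresholds mirror the note above (at least 10 total occurrences, |ρ| > 0.5). Significance testing is omitted, and `spearman_rho` is a plain-Python stand-in for a library routine:

```python
import math

def spearman_rho(x, y):
    """Spearman rank correlation; ties receive average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend the tie group
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    if sx == 0 or sy == 0:
        return 0.0                          # constant series: no trend
    return cov / (sx * sy)

def classify_trend(years, counts, min_total=10, min_rho=0.5):
    """Label a word's yearly counts as rising, declining, or stable."""
    if sum(counts) < min_total:
        return "excluded"                   # too rare to assess
    rho = spearman_rho(years, counts)
    if rho > min_rho:
        return "rising"
    if rho < -min_rho:
        return "declining"
    return "stable"
```

A strictly increasing series yields ρ = 1 and is labelled “rising”; a constant series is “stable”; a word with fewer than 10 total occurrences is excluded before the correlation is even computed.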

No statistically significant trends were identified among the most common words.

Impact and accessibility
Open Access
Open Access: 28% (2026: 50%)
Green OA: 7% (1995–2026)

The OA analysis is based on 841 publications with a DOI matched against OpenAlex (73% of 1 145 in total). 245 publications lack a DOI and are therefore not included in the OA statistics.

Open Access category definitions
  • Gold OA: Published in a fully open access journal (typically with an article processing charge).
  • Green OA: Freely available via an open repository (e.g. institutional repository), typically after an embargo period of 6–12 months, even if the journal is not open access.
  • Hybrid: Published as an open article in an otherwise subscription-based journal (typically with an APC).
  • Bronze: Freely readable on the publisher’s website but without a clear open license (may be removed). Not counted in the OA share because it lacks a formal open license (BOAI/Berlin Declaration).
  • Diamond: Published in a journal that is fully open with no author-facing charges (APC). Often funded by institutions or organizations.
  • Closed: Not freely available — requires subscription or purchase.
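Assuming OA statuses are read from OpenAlex's `open_access.oa_status` field (whose values include gold, green, hybrid, bronze, diamond, and closed), the bronze-excluding OA share described above can be sketched as follows; the function name is hypothetical:

```python
from collections import Counter

# Statuses counted as Open Access. Bronze is excluded because it lacks
# a formal open license (BOAI/Berlin Declaration), as noted above.
OPEN_STATUSES = {"gold", "green", "hybrid", "diamond"}

def oa_share(statuses):
    """Share of publications with a formally open status.
    `statuses` is a list of oa_status strings, e.g. from OpenAlex."""
    if not statuses:
        return 0.0
    counts = Counter(statuses)
    n_open = sum(counts[s] for s in OPEN_STATUSES)
    return n_open / len(statuses)

# oa_share(["gold", "bronze", "closed", "green"]) -> 0.5 (bronze not counted)
```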

Insights
The OA share rose from 0% to 50% (+50 percentage points) between 1996 and 2026, with Gold accounting for the largest increase (+28.7 percentage points). Green OA accounts for 7.5% of all publications; these become available via open repositories after an embargo period (typically 6–12 months), so more recent publications may not yet be freely accessible. Diamond OA (no fees for authors or readers) accounts for 3.4%.

Open Access types over time
Open/closed per year (absolute)

Note:
841 of 900 publications with DOI were matched against OpenAlex and assigned OA status (93.4%). OA status is sourced from OpenAlex (based on Unpaywall). OA status may be retroactively classified — a publication that is freely available today may have been closed at the time of publication. The trend should therefore be interpreted with caution, especially for older publications. Green OA classification is based on the presence of a version in an open repository, regardless of whether any embargo period has expired — the Green OA share may therefore be overestimated for more recent publications.

Method and limitations
Data source
SwePub + OpenAlex/Unpaywall (Piwowar et al., 2018)
Time period
1995–2026
Counting method
Full counting — each publication counted as one unit
Limitations
  • OA status is sourced from OpenAlex (based on Unpaywall) and may differ from the publisher’s current status. Retroactive changes to OA status are not always captured.
  • The Green OA time series shows the current proportion per publication year — not when the article actually became openly available. Retroactive self-archiving (backfilling) means older years may show higher Green OA shares than at the time of publication.
  • OA data is sourced from OpenAlex/Unpaywall. Coverage is incomplete — actual OA share may be higher than reported, especially for older publications and material archived in systems outside Unpaywall.
  • Bronze OA (freely readable without an open license) is excluded from the OA share since Bifrost v0.8.0, in accordance with the BOAI/Berlin Declaration requirement for an open license. Comparisons with reports generated by older versions may show lower OA shares for the same period. Bronze is still shown in charts and tables.
  • Confidence intervals for OA proportions are computed using the Wilson score method (Wilson, 1927), which provides reliable intervals even for small samples.
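For reference, the Wilson interval can be computed directly. This is a generic sketch (z = 1.96 for a 95% interval), not the report's own implementation:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a proportion k/n (Wilson, 1927).
    Stays inside [0, 1] and behaves well even when k is 0 or n."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half, center + half)
```

For 0 successes out of 10 trials the interval is roughly (0, 0.28), whereas the naive normal-approximation interval would collapse to (0, 0); this is why Wilson is preferred for small samples and extreme proportions.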
Publications
Theses

The first doctoral thesis in the dataset, Forests and Water – Friends or Foes?: Hydrological implications of deforestation and land degradation in semi-arid Tanzania by Klas Sandström, dates from 1995. From then until 2025, a total of 82 theses were registered: 69 doctoral theses and 13 licentiate theses.

Theses: 82 (2026: 0)
Supervisors
Opponents
Method and limitations
Data source
SwePub
Time period
1995–2025
Counting method
Full counting — each publication counted as one unit
Limitations
  • Thesis data is sourced from the selected data source. Information on type, supervisor, and opponent depends on how the registering institution has entered the data.
  • Coverage may be incomplete: theses not registered in the data source are not visible in the analysis. Historical theses are often underrepresented.
  • Data reflects publishing activity registered in SwePub and may differ from the institution’s internal statistics.
Publications

A complete list of the search results, initially sorted by year (descending) and author (ascending). The sort order can be changed via the column headers, and the search covers all displayed fields.


Method references

Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, P10008. https://doi.org/10.1088/1742-5468/2008/10/P10008

Bornmann, L., & Marx, W. (2018). Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics. Journal of Informetrics, 12(3), 598–604. https://doi.org/10.1016/j.joi.2018.05.002

CoARA (2022). Agreement on Reforming Research Assessment. https://coara.eu/agreement/the-agreement-full-text/

DORA (2012). San Francisco Declaration on Research Assessment. https://sfdora.org/read/

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431. https://doi.org/10.1038/520429a

Kleinberg, J. (2003). Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery, 7(4), 373–397. https://doi.org/10.1023/A:1024940629314

Mann, H. B. (1945). Nonparametric tests against trend. Econometrica, 13(3), 245–259. https://doi.org/10.2307/1907187

Neal, Z. P. (2022). backbone: An R package to extract network backbones. PLOS ONE, 17(5), e0269137. https://doi.org/10.1371/journal.pone.0269137

Newman, M. E. J. (2004). Analysis of weighted networks. Physical Review E, 70(5), 056131. https://doi.org/10.1103/PhysRevE.70.056131

Opsahl, T., Agneessens, F., & Skvoretz, J. (2010). Node centrality in weighted networks: Generalizing degree and shortest paths. Social Networks, 32(3), 245–251. https://doi.org/10.1016/j.socnet.2010.03.006

Perianes-Rodriguez, A., Waltman, L., & Van Eck, N. J. (2016). Constructing bibliometric networks: A comparison between full and fractional counting. Journal of Informetrics, 10(4), 1178–1195. https://doi.org/10.1016/j.joi.2016.10.006

Piwowar, H., Priem, J., Larivière, V., Alperin, J. P., Matthias, L., Norlander, B., Farley, A., West, J., & Haustein, S. (2018). The state of OA: A large-scale analysis of the prevalence and impact of Open Access articles. PeerJ, 6, e4375. https://doi.org/10.7717/peerj.4375

Sen, P. K. (1968). Estimates of the regression coefficient based on Kendall’s tau. Journal of the American Statistical Association, 63(324), 1379–1389. https://doi.org/10.2307/2285891

Serrano, M. Á., Boguñá, M., & Vespignani, A. (2009). Extracting the multiscale backbone of complex weighted networks. Proceedings of the National Academy of Sciences, 106(16), 6483–6488. https://doi.org/10.1073/pnas.0808904106

Stirling, A. (2007). A general framework for analysing diversity in science, technology and society. Journal of The Royal Society Interface, 4(15), 707–719. https://doi.org/10.1098/rsif.2007.0213

Traag, V. A., Waltman, L., & Van Eck, N. J. (2019). From Louvain to Leiden: guaranteeing well-connected communities. Scientific Reports, 9, 5233. https://doi.org/10.1038/s41598-019-41695-z

Van Eck, N. J., & Waltman, L. (2009). How to normalize cooccurrence data? An analysis of some well-known similarity measures. Journal of the American Society for Information Science and Technology, 60(8), 1635–1651. https://doi.org/10.1002/asi.21075

Van Eck, N. J., & Waltman, L. (2014). Visualizing bibliometric networks. In Ding, Y., Rousseau, R., & Wolfram, D. (Eds.), Measuring Scholarly Impact: Methods and Practice (pp. 285–320). Springer. https://doi.org/10.1007/978-3-319-10377-8_13

Wilson, E. B. (1927). Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22(158), 209–212.