bodc / owc-software-python · Merge request !14

Merged
Created Nov 21, 2019 by edsmall (Contributor)

ARGODEV-160: Convert signal

  • Overview 0
  • Commits 12
  • Pipelines 5
  • Changes 4

Jira Issue

ARGODEV-160 (https://jira.ceh.ac.uk/browse/ARGODEV-160)

Python Implementation

The Matlab version used a slightly different equation involving the standard deviation, for some reason. Using the equation from the paper, outlined in the Old Matlab Implementation section below, gives the same result as the Matlab version, so I have decided to use the version from the paper, as it is less heavy-handed.

I have also added an exception that is raised if the input data set contains no quantifiable data (i.e. it is all 0s or NaNs).
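A minimal sketch of what such a function and guard might look like (the name `signal_variance`, the filtering rule, and the choice of `ValueError` are my assumptions for illustration, not necessarily what this MR implements):

```python
import numpy as np

def signal_variance(sal):
    """Estimate the signal variance of a set of salinities.

    Hypothetical sketch: drops empty values (0s and NaNs), raises if
    nothing quantifiable remains, then applies the equation from the
    paper: sum((d_i - D)^2) / N.
    """
    sal = np.asarray(sal, dtype=float)
    # Ignore empty values (0s and NaNs) before computing the variance
    sal = sal[np.isfinite(sal) & (sal != 0)]
    if sal.size == 0:
        raise ValueError("Data set contains no quantifiable salinities")
    # Mean of the remaining data points, D in the equation
    mean = np.mean(sal)
    return float(np.sum((sal - mean) ** 2) / sal.size)
```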

Testing

5 tests for this function:

  • Test that the return type is a float
  • Test that we get an exception if we have no valid salinities
  • Test that empty values are ignored in the calculation
  • Test that inputs of a and -1*a give the same answer
  • Test that we receive the expected result if given certain inputs
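The five checks above could be sketched roughly as follows. This is an illustrative, self-contained version against a stand-in implementation; the real test names, fixtures, and function signature in the MR may differ:

```python
import numpy as np

# Stand-in implementation under test (an assumption, not the MR's code)
def signal_variance(sal):
    sal = np.asarray(sal, dtype=float)
    sal = sal[np.isfinite(sal) & (sal != 0)]
    if sal.size == 0:
        raise ValueError("no quantifiable data")
    return float(np.sum((sal - np.mean(sal)) ** 2) / sal.size)

def test_returns_float():
    assert isinstance(signal_variance([34.5, 34.7]), float)

def test_raises_on_no_valid_salinities():
    try:
        signal_variance([0.0, np.nan, 0.0])
        assert False, "expected an exception"
    except ValueError:
        pass

def test_ignores_empty_values():
    # NaNs and 0s should not affect the result
    assert signal_variance([1.0, 2.0, 3.0, np.nan, 0.0]) == signal_variance([1.0, 2.0, 3.0])

def test_sign_flip_gives_same_answer():
    # Squared deviations are unchanged when every point is negated
    a = np.array([34.1, 34.9, 35.2])
    assert signal_variance(a) == signal_variance(-1 * a)

def test_expected_result():
    # mean D = 2, squared deviations sum to 2, N = 3
    assert abs(signal_variance([1.0, 2.0, 3.0]) - 2.0 / 3.0) < 1e-12
```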

Old Matlab Implementation

Estimates the signal variance at a given level from a data set d of salinities using the following equation:

sum((di - D)^2) / N

where:

  • di is a data point in d
  • D is the mean of all the data points in d
  • N is the number of data points in d
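As a quick worked instance of the equation above (the values are illustrative, not from the MR): for d = [2, 4, 6], the mean D is 4, the squared deviations are 4, 0, and 4, and dividing their sum by N = 3 gives 8/3.

```python
import numpy as np

d = np.array([2.0, 4.0, 6.0])  # illustrative data points
D = d.mean()                   # mean of all points in d: 4.0
N = d.size                     # number of data points: 3
variance = np.sum((d - D) ** 2) / N  # (4 + 0 + 4) / 3
```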
Edited Nov 21, 2019 by edsmall
Source branch: edsmall/signal