High Level APIs
This group of libraries provides higher-level functionality that isn't hardware related, or provides a richer set of functionality above the basic hardware interfaces.
pico_async_context
Data Structures
-
struct async_work_on_timeout
-
A "timeout" instance used by an async_context. More...
-
struct async_when_pending_worker
-
A "worker" instance used by an async_context. More...
-
struct async_context_type
-
Implementation of an async_context type, providing methods common to that type.
-
struct async_context
-
Base structure type of all async_contexts. For details about its use, see pico_async_context.
Typedefs
-
typedef struct async_work_on_timeout async_at_time_worker_t
-
A "timeout" instance used by an async_context.
-
typedef struct async_when_pending_worker async_when_pending_worker_t
-
A "worker" instance used by an async_context.
-
typedef struct async_context_type async_context_type_t
-
Implementation of an async_context type, providing methods common to that type.
Functions
-
static void async_context_acquire_lock_blocking (async_context_t *context)
-
Acquire the async_context lock.
-
static void async_context_release_lock (async_context_t *context)
-
Release the async_context lock.
-
static void async_context_lock_check (async_context_t *context)
-
Assert if the caller does not own the lock for the async_context.
-
static uint32_t async_context_execute_sync (async_context_t *context, uint32_t(*func)(void *param), void *param)
-
Execute work synchronously on the core the async_context belongs to.
-
static bool async_context_add_at_time_worker (async_context_t *context, async_at_time_worker_t *worker)
-
Add an "at time" worker to a context.
-
static bool async_context_add_at_time_worker_at (async_context_t *context, async_at_time_worker_t *worker, absolute_time_t at)
-
Add an "at time" worker to a context.
-
static bool async_context_add_at_time_worker_in_ms (async_context_t *context, async_at_time_worker_t *worker, uint32_t ms)
-
Add an "at time" worker to a context.
-
static bool async_context_remove_at_time_worker (async_context_t *context, async_at_time_worker_t *worker)
-
Remove an "at time" worker from a context.
-
static bool async_context_add_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker)
-
Add a "when pending" worker to a context.
-
static bool async_context_remove_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker)
-
Remove a "when pending" worker from a context.
-
static void async_context_set_work_pending (async_context_t *context, async_when_pending_worker_t *worker)
-
Mark a "when pending" worker as having work pending.
-
static void async_context_poll (async_context_t *context)
-
Perform any pending work for polling style async_context.
-
static void async_context_wait_until (async_context_t *context, absolute_time_t until)
-
Sleep until the specified time in an async_context callback safe way.
-
static void async_context_wait_for_work_until (async_context_t *context, absolute_time_t until)
-
Block until work needs to be done or the specified time has been reached.
-
static void async_context_wait_for_work_ms (async_context_t *context, uint32_t ms)
-
Block until work needs to be done or the specified number of milliseconds have passed.
-
static uint async_context_core_num (const async_context_t *context)
-
Return the processor core this async_context belongs to.
-
static void async_context_deinit (async_context_t *context)
-
End async_context processing, and free any resources.
Detailed Description
An async_context provides a logically single-threaded context for performing work, and responding to asynchronous events. Thus an async_context instance is suitable for servicing third-party libraries that are not re-entrant.
The "context" in async_context refers to the fact that when calling workers or timeouts within the async_context various pre-conditions hold:
That there is a single logical thread of execution; i.e. that the context does not call any worker functions concurrently.
That the context always calls workers from the same processor core, as most uses of async_context rely on interaction with IRQs which are themselves core-specific.
The async_context provides two mechanisms for asynchronous work:
when_pending workers, which are processed whenever they have work pending. See async_context_add_when_pending_worker, async_context_remove_when_pending_worker, and async_context_set_work_pending, the latter of which can be used from an interrupt handler to signal that servicing work is required to be performed by the worker from the regular async_context.
at_time workers, which are executed at (or shortly after) a specific time.
Note: "when pending" workers with work pending are executed before "at time" workers.
The async_context provides locking mechanisms, see async_context_acquire_lock_blocking, async_context_release_lock and async_context_lock_check, which can be used by external code to ensure execution of external code does not happen concurrently with worker code. Locked code runs on the calling core, however async_context_execute_sync is provided to synchronously run a function from the core of the async_context.
The SDK ships with the following default async_contexts:
async_context_poll - this context is not thread-safe, and the user is responsible for calling async_context_poll periodically, and can use async_context_wait_for_work_until() to sleep between calls until work is needed if the user has nothing else to do.
async_context_threadsafe_background - in order to work in the background, a low priority IRQ is used to handle callbacks. Code is usually invoked from this IRQ context, but may be invoked after any other code that uses the async context in another (non-IRQ) context on the same core. Calling async_context_poll is not required, and is a no-op. This context implements async_context locking and is thus safe to call from either core, according to the specific notes on each API.
async_context_freertos - Work is performed from a separate "async_context" task, however once again, code may also be invoked after a direct use of the async_context on the same core that the async_context belongs to. Calling async_context_poll is not required, and is a no-op. This context implements async_context locking and is thus safe to call from any task, and from either core, according to the specific notes on each API.
Each async_context provides bespoke methods of instantiation which are provided in the corresponding headers (e.g. async_context_poll.h, async_context_threadsafe_background.h, async_context_freertos.h). async_contexts are de-initialized by the common async_context_deinit() method.
Multiple async_context instances can be used by a single application, and they will operate independently.
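As a sketch of the overall flow (worker and variable names are illustrative, not from the SDK documentation; a polling context is used here, but the worker API is the same for all context types):

#include <stdio.h>
#include "pico/stdlib.h"
#include "pico/async_context_poll.h"

// Illustrative "at time" worker that re-arms itself every 500 ms.
// "at time" workers are removed just before they run, so they must be
// re-added in order to fire again.
static void timer_work_fn(async_context_t *context, async_at_time_worker_t *worker) {
    printf("tick\n");
    async_context_add_at_time_worker_in_ms(context, worker, 500);
}

static async_at_time_worker_t timer_worker = { .do_work = timer_work_fn };

int main(void) {
    stdio_init_all();
    static async_context_poll_t ctx;
    if (!async_context_poll_init_with_defaults(&ctx)) return -1;
    async_context_add_at_time_worker_in_ms(&ctx.core, &timer_worker, 500);
    while (true) {
        async_context_poll(&ctx.core);                   // service any due workers
        async_context_wait_for_work_ms(&ctx.core, 1000); // sleep until work is needed
    }
}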
Typedef Documentation
◆ async_at_time_worker_t
typedef struct async_work_on_timeout async_at_time_worker_t
A "timeout" instance used by an async_context.
A "timeout" represents some future action that must be taken at a specific time. It's methods are called from the async_context under lock at the given time
See also async_context_add_at_time_worker_at, async_context_add_at_time_worker_in_ms
◆ async_when_pending_worker_t
typedef struct async_when_pending_worker async_when_pending_worker_t
A "worker" instance used by an async_context.
A "worker" represents some external entity that must do work in response to some external stimulus (usually an IRQ). It's methods are called from the async_context under lock at the given time
See also async_context_add_when_pending_worker, async_context_remove_when_pending_worker
Function Documentation
◆ async_context_acquire_lock_blocking()
static void async_context_acquire_lock_blocking (async_context_t *context) [inline, static]
Acquire the async_context lock.
The owner of the async_context lock is the logical owner of the async_context, and other work related to this async_context will not happen concurrently.
This method may be called in a nested fashion by the lock owner.
Note: the async_context lock is nestable by the same caller, so an internal count is maintained.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
See also async_context_release_lock
◆ async_context_add_at_time_worker()
static bool async_context_add_at_time_worker (async_context_t *context, async_at_time_worker_t *worker) [inline, static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically when (just before) it runs.
The time to fire is specified in the next_time field of the worker.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "at time" worker to add |
Returns
true if the worker was added, false if the worker was already present.
◆ async_context_add_at_time_worker_at()
static bool async_context_add_at_time_worker_at (async_context_t *context, async_at_time_worker_t *worker, absolute_time_t at) [inline, static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically when (just before) it runs.
The time to fire is specified by the at parameter.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "at time" worker to add |
at | the time to fire at |
Returns
true if the worker was added, false if the worker was already present.
◆ async_context_add_at_time_worker_in_ms()
static bool async_context_add_at_time_worker_in_ms (async_context_t *context, async_at_time_worker_t *worker, uint32_t ms) [inline, static]
Add an "at time" worker to a context.
An "at time" worker will run at or after a specific point in time, and is automatically when (just before) it runs.
The time to fire is specified by a delay via the ms parameter
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "at time" worker to add |
ms | the number of milliseconds from now to fire after |
Returns
true if the worker was added, false if the worker was already present.
◆ async_context_add_when_pending_worker()
static bool async_context_add_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker) [inline, static]
Add a "when pending" worker to a context.
An "when pending" worker will run when it is pending (can be set via async_context_set_work_pending), and is NOT automatically removed when it runs.
The time to fire is specified by a delay via the ms parameter
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "when pending" worker to add |
Returns
true if the worker was added, false if the worker was already present.
◆ async_context_core_num()
static uint async_context_core_num (const async_context_t *context) [inline, static]
Return the processor core this async_context belongs to.
Parameters
context | the async_context |
Returns
the physical core number
◆ async_context_deinit()
static void async_context_deinit (async_context_t *context) [inline, static]
End async_context processing, and free any resources.
Note: the user should clean up any resources associated with workers in the async_context themselves.
Asynchronous (non-polled) async_contexts guarantee that no callback is being called once this method returns.
Parameters
context | the async_context |
◆ async_context_execute_sync()
static uint32_t async_context_execute_sync (async_context_t *context, uint32_t(*func)(void *param), void *param) [inline, static]
Execute work synchronously on the core the async_context belongs to.
This method is intended for code external to the async_context (e.g. another thread/task) to execute a function with the same guarantees (single core, logical thread of execution) that async_context workers are called with.
Note: you should NOT call this method while holding the async_context's lock.
Parameters
context | the async_context |
func | the function to call |
param | the parameter to pass to the function |
Returns
the return value from func
◆ async_context_lock_check()
static void async_context_lock_check (async_context_t *context) [inline, static]
Assert if the caller does not own the lock for the async_context.
Note: this method is thread-safe.
Parameters
context | the async_context |
◆ async_context_poll()
static void async_context_poll (async_context_t *context) [inline, static]
Perform any pending work for polling style async_context.
For a polled async_context (e.g. async_context_poll) the user is responsible for calling this method periodically to perform any required work.
This method may immediately perform outstanding work on other context types, but is not required to.
Parameters
context | the async_context |
◆ async_context_release_lock()
static void async_context_release_lock (async_context_t *context) [inline, static]
Release the async_context lock.
Note: the async_context lock may be taken in a nested fashion, so an internal count is maintained. When the outermost lock is released, a check is made for work which might have been skipped while the lock was held, and any such work may be performed during this call IF the call is made from the same core that the async_context belongs to. For async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
◆ async_context_remove_at_time_worker()
static bool async_context_remove_at_time_worker (async_context_t *context, async_at_time_worker_t *worker) [inline, static]
Remove an "at time" worker from a context.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "at time" worker to remove |
Returns
true if the worker was removed, false if the instance was not present.
◆ async_context_remove_when_pending_worker()
static bool async_context_remove_when_pending_worker (async_context_t *context, async_when_pending_worker_t *worker) [inline, static]
Remove a "when pending" worker from a context.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
worker | the "when pending" worker to remove |
Returns
true if the worker was removed, false if the instance was not present.
◆ async_context_set_work_pending()
static void async_context_set_work_pending (async_context_t *context, async_when_pending_worker_t *worker) [inline, static]
Mark a "when pending" worker as having work pending.
The worker will be run from the async_context at a later time.
Note: this method may be called from any context, including IRQs.
Parameters
context | the async_context |
worker | the "when pending" worker to mark as pending. |
◆ async_context_wait_for_work_ms()
static void async_context_wait_for_work_ms (async_context_t *context, uint32_t ms) [inline, static]
Block until work needs to be done or the specified number of milliseconds have passed.
Note: this method should not be called from a worker callback.
Parameters
context | the async_context |
ms | the number of milliseconds to return after if no work is required |
◆ async_context_wait_for_work_until()
static void async_context_wait_for_work_until (async_context_t *context, absolute_time_t until) [inline, static]
Block until work needs to be done or the specified time has been reached.
Note: this method should not be called from a worker callback.
Parameters
context | the async_context |
until | the time to return at if no work is required |
◆ async_context_wait_until()
static void async_context_wait_until (async_context_t *context, absolute_time_t until) [inline, static]
Sleep until the specified time in an async_context callback safe way.
Note: for async_contexts that provide locking (not async_context_poll), this method is thread-safe and may be called from within any worker method called by the async_context, or from any other non-IRQ context.
Parameters
context | the async_context |
until | the time to sleep until |
async_context_freertos
Functions
-
bool async_context_freertos_init (async_context_freertos_t *self, async_context_freertos_config_t *config)
-
Initialize an async_context_freertos instance using the specified configuration.
-
static async_context_freertos_config_t async_context_freertos_default_config (void)
-
Return a copy of the default configuration object used by async_context_freertos_init_with_defaults()
-
static bool async_context_freertos_init_with_defaults (async_context_freertos_t *self)
-
Initialize an async_context_freertos instance with default values.
Detailed Description
async_context_freertos provides an implementation of async_context that handles asynchronous work in a separate FreeRTOS task.
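As a sketch of typical usage (the task_priority field name is taken from async_context_freertos.h; the priority value 4 is just an illustration, to be adjusted for your FreeRTOS configuration):

#include "pico/async_context_freertos.h"

static async_context_freertos_t async_ctx;

bool my_setup_async_context(void) {
    // start from the defaults, then override the settings you care about
    async_context_freertos_config_t config = async_context_freertos_default_config();
    config.task_priority = 4;  // assumption: suitable for your task set
    return async_context_freertos_init(&async_ctx, &config);
}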
Function Documentation
◆ async_context_freertos_default_config()
static async_context_freertos_config_t async_context_freertos_default_config (void) [inline, static]
Return a copy of the default configuration object used by async_context_freertos_init_with_defaults()
The caller can then modify just the settings it cares about, and call async_context_freertos_init()
Returns
the default configuration object
◆ async_context_freertos_init()
bool async_context_freertos_init | ( | async_context_freertos_t * | self, |
async_context_freertos_config_t * | config | ||
) |
Initialize an async_context_freertos instance using the specified configuration.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self | a pointer to async_context_freertos structure to initialize |
config | the configuration object specifying characteristics for the async_context |
Returns
true if initialization is successful, false otherwise
◆ async_context_freertos_init_with_defaults()
static bool async_context_freertos_init_with_defaults (async_context_freertos_t *self) [inline, static]
Initialize an async_context_freertos instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self | a pointer to async_context_freertos structure to initialize |
Returns
true if initialization is successful, false otherwise
async_context_poll
Functions
-
bool async_context_poll_init_with_defaults (async_context_poll_t *self)
-
Initialize an async_context_poll instance with default values.
Detailed Description
async_context_poll provides an implementation of async_context that is intended for use with a simple polling loop on one core. It is not thread safe.
The async_context_poll method must be called periodically to handle asynchronous work that may now be pending. async_context_wait_for_work_until() may be used to block a polling loop until there is work to do, and prevent tight spinning.
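For instance (a sketch; it assumes ctx is an async_context_poll_t already initialized via async_context_poll_init_with_defaults, with workers added):

while (true) {
    async_context_poll(&ctx.core);
    // block until a worker needs servicing, or for at most 100 ms
    async_context_wait_for_work_until(&ctx.core, make_timeout_time_ms(100));
}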
Function Documentation
◆ async_context_poll_init_with_defaults()
bool async_context_poll_init_with_defaults | ( | async_context_poll_t * | self | ) |
Initialize an async_context_poll instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self | a pointer to async_context_poll structure to initialize |
Returns
true if initialization is successful, false otherwise
async_context_threadsafe_background
Functions
-
bool async_context_threadsafe_background_init (async_context_threadsafe_background_t *self, async_context_threadsafe_background_config_t *config)
-
Initialize an async_context_threadsafe_background instance using the specified configuration.
-
async_context_threadsafe_background_config_t async_context_threadsafe_background_default_config (void)
-
Return a copy of the default configuration object used by async_context_threadsafe_background_init_with_defaults()
-
static bool async_context_threadsafe_background_init_with_defaults (async_context_threadsafe_background_t *self)
-
Initialize an async_context_threadsafe_background instance with default values.
Detailed Description
async_context_threadsafe_background provides an implementation of async_context that handles asynchronous work in a low priority IRQ, and there is no need for the user to poll for work.
Note: The workers used with this async_context MUST be safe to call from an IRQ.
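A minimal initialization sketch (the variable and function names are illustrative):

#include "pico/async_context_threadsafe_background.h"

static async_context_threadsafe_background_t async_ctx;

bool my_init(void) {
    // uses the default low priority IRQ configuration; workers added to this
    // context will be invoked from IRQ context, so they must be IRQ-safe
    return async_context_threadsafe_background_init_with_defaults(&async_ctx);
}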
Function Documentation
◆ async_context_threadsafe_background_default_config()
async_context_threadsafe_background_config_t async_context_threadsafe_background_default_config | ( | void | ) |
Return a copy of the default configuration object used by async_context_threadsafe_background_init_with_defaults()
The caller can then modify just the settings it cares about, and call async_context_threadsafe_background_init()
Returns
the default configuration object
◆ async_context_threadsafe_background_init()
bool async_context_threadsafe_background_init | ( | async_context_threadsafe_background_t * | self, |
async_context_threadsafe_background_config_t * | config | ||
) |
Initialize an async_context_threadsafe_background instance using the specified configuration.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self | a pointer to async_context_threadsafe_background structure to initialize |
config | the configuration object specifying characteristics for the async_context |
Returns
true if initialization is successful, false otherwise
◆ async_context_threadsafe_background_init_with_defaults()
static bool async_context_threadsafe_background_init_with_defaults (async_context_threadsafe_background_t *self) [inline, static]
Initialize an async_context_threadsafe_background instance with default values.
If this method succeeds (returns true), then the async_context is available for use and can be de-initialized by calling async_context_deinit().
Parameters
self | a pointer to async_context_threadsafe_background structure to initialize |
Returns
true if initialization is successful, false otherwise
pico_multicore
Functions
-
void multicore_reset_core1 (void)
-
Reset core 1.
-
void multicore_launch_core1 (void(*entry)(void))
-
Run code on core 1.
-
void multicore_launch_core1_with_stack (void(*entry)(void), uint32_t *stack_bottom, size_t stack_size_bytes)
-
Launch code on core 1 with stack.
-
void multicore_launch_core1_raw (void(*entry)(void), uint32_t *sp, uint32_t vector_table)
-
Launch code on core 1 with no stack protection.
Example
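A minimal sketch of launching core 1 and handshaking over the inter-core FIFOs, in the style of the pico-examples multicore demo (FLAG_VALUE and the function names are illustrative, not SDK constants):

#include <stdio.h>
#include "pico/stdlib.h"
#include "pico/multicore.h"

#define FLAG_VALUE 123  // arbitrary handshake value

void core1_entry(void) {
    multicore_fifo_push_blocking(FLAG_VALUE);    // tell core 0 we are running
    uint32_t g = multicore_fifo_pop_blocking();  // wait for core 0's reply
    if (g == FLAG_VALUE)
        printf("core 1: handshake OK\n");
    while (true)
        tight_loop_contents();
}

int main(void) {
    stdio_init_all();
    multicore_launch_core1(core1_entry);
    uint32_t g = multicore_fifo_pop_blocking();  // wait for core 1 to start
    if (g == FLAG_VALUE) {
        multicore_fifo_push_blocking(FLAG_VALUE);
        printf("core 0: handshake OK\n");
    }
    while (true)
        tight_loop_contents();
}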
Function Documentation
◆ multicore_launch_core1()
void multicore_launch_core1 | ( | void(*)(void) | entry | ) |
Run code on core 1.
Wake up (a previously reset) core 1 and enter the given function on core 1 using the default core 1 stack (below core 0 stack).
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
core 1 will use the same vector table as core 0
Parameters
entry | Function entry point |
See also multicore_reset_core1
◆ multicore_launch_core1_raw()
void multicore_launch_core1_raw | ( | void(*)(void) | entry, |
uint32_t * | sp, | ||
uint32_t | vector_table | ||
) |
Launch code on core 1 with no stack protection.
Wake up (a previously reset) core 1 and start it executing with a specific entry point, stack pointer and vector table.
This is a low level function that does not provide a stack guard even if USE_STACK_GUARDS is defined
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
Parameters
entry | Function entry point |
sp | Pointer to the top of the core 1 stack |
vector_table | address of the vector table to use for core 1 |
See also multicore_reset_core1
◆ multicore_launch_core1_with_stack()
void multicore_launch_core1_with_stack | ( | void(*)(void) | entry, |
uint32_t * | stack_bottom, | ||
size_t | stack_size_bytes | ||
) |
Launch code on core 1 with stack.
Wake up (a previously reset) core 1 and enter the given function on core 1 using the passed stack for core 1
core 1 must previously have been reset either as a result of a system reset or by calling multicore_reset_core1
core 1 will use the same vector table as core 0
Parameters
entry | Function entry point |
stack_bottom | The bottom (lowest address) of the stack |
stack_size_bytes | The size of the stack in bytes (must be a multiple of 4) |
See also multicore_reset_core1
◆ multicore_reset_core1()
void multicore_reset_core1 | ( | void | ) |
Reset core 1.
This function can be used to reset core 1 into its initial state (ready for launching code on it via multicore_launch_core1 and similar methods)
Note: this function should only be called from core 0.
fifo
Functions for the inter-core FIFOs.
Functions
-
static bool multicore_fifo_rvalid (void)
-
Check the read FIFO to see if there is data available (sent by the other core)
-
static bool multicore_fifo_wready (void)
-
Check the write FIFO to see if it has space for more data.
-
void multicore_fifo_push_blocking (uint32_t data)
-
Push data on to the write FIFO (data to the other core).
-
bool multicore_fifo_push_timeout_us (uint32_t data, uint64_t timeout_us)
-
Push data on to the write FIFO (data to the other core) with timeout.
-
uint32_t multicore_fifo_pop_blocking (void)
-
Pop data from the read FIFO (data from the other core).
-
bool multicore_fifo_pop_timeout_us (uint64_t timeout_us, uint32_t *out)
-
Pop data from the read FIFO (data from the other core) with timeout.
-
static void multicore_fifo_drain (void)
-
Discard any data in the read FIFO.
-
static void multicore_fifo_clear_irq (void)
-
Clear FIFO interrupt.
-
static uint32_t multicore_fifo_get_status (void)
-
Get FIFO statuses.
Detailed Description
Functions for the inter-core FIFOs.
The RP2040 contains two FIFOs for passing data, messages or ordered events between the two cores. Each FIFO is 32 bits wide, and 8 entries deep. One of the FIFOs can only be written by core 0, and read by core 1. The other can only be written by core 1, and read by core 0.
Note: The inter-core FIFOs are a very precious resource and are frequently used for SDK functionality (e.g. during core 1 launch or by the lockout functions). Additionally they are often required for the exclusive use of an RTOS (e.g. FreeRTOS SMP). For these reasons it is suggested that you do not use the FIFO for your own purposes unless none of the above concerns apply; the majority of cases for transferring data between cores can be equally well handled by using a queue.
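Where the FIFO is nevertheless appropriate, the timeout variants avoid blocking indefinitely. A sketch (the helper names are illustrative):

#include "pico/multicore.h"

// send a word to the other core, waiting at most 10 ms for FIFO space
static bool send_word(uint32_t data) {
    return multicore_fifo_push_timeout_us(data, 10 * 1000);
}

// receive a word from the other core, waiting at most 10 ms for data
static bool recv_word(uint32_t *out) {
    return multicore_fifo_pop_timeout_us(10 * 1000, out);
}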
Function Documentation
◆ multicore_fifo_clear_irq()
static void multicore_fifo_clear_irq (void) [inline, static]
Clear FIFO interrupt.
Note that this only clears an interrupt that was caused by the ROE or WOF flags. To clear the VLD flag you need to use one of the 'pop' or 'drain' functions.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
See also multicore_fifo_get_status
◆ multicore_fifo_drain()
static void multicore_fifo_drain (void) [inline, static]
Discard any data in the read FIFO.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
◆ multicore_fifo_get_status()
static uint32_t multicore_fifo_get_status (void) [inline, static]
Get FIFO statuses.
Returns
The statuses as a bitfield
Bit | Description |
3 | Sticky flag indicating the RX FIFO was read when empty (ROE). This read was ignored by the FIFO. |
2 | Sticky flag indicating the TX FIFO was written when full (WOF). This write was ignored by the FIFO. |
1 | Value is 1 if this core’s TX FIFO is not full (i.e. if FIFO_WR is ready for more data) |
0 | Value is 1 if this core’s RX FIFO is not empty (i.e. if FIFO_RD is valid) |
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
◆ multicore_fifo_pop_blocking()
uint32_t multicore_fifo_pop_blocking | ( | void | ) |
Pop data from the read FIFO (data from the other core).
This function will block until there is data ready to be read. Use multicore_fifo_rvalid() to check if data is ready to be read if you don't want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
32 bit data from the read FIFO.
◆ multicore_fifo_pop_timeout_us()
bool multicore_fifo_pop_timeout_us | ( | uint64_t | timeout_us, |
uint32_t * | out | ||
) |
Pop data from the read FIFO (data from the other core) with timeout.
This function will block until there is data ready to be read or the timeout is reached
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Parameters
timeout_us | the timeout in microseconds |
out | the location to store the popped data if available |
Returns
true if the data was popped and a value copied into out, false if the timeout occurred before data could be popped
◆ multicore_fifo_push_blocking()
void multicore_fifo_push_blocking | ( | uint32_t | data | ) |
Push data on to the write FIFO (data to the other core).
This function will block until there is space for the data to be sent. Use multicore_fifo_wready() to check if it is possible to write to the FIFO if you don't want to block.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Parameters
data | A 32 bit value to push on to the FIFO |
◆ multicore_fifo_push_timeout_us()
bool multicore_fifo_push_timeout_us | ( | uint32_t | data, |
uint64_t | timeout_us | ||
) |
Push data on to the write FIFO (data to the other core) with timeout.
This function will block until there is space for the data to be sent or the timeout is reached
Parameters
data | A 32 bit value to push on to the FIFO |
timeout_us | the timeout in microseconds |
Returns
true if the data was pushed, false if the timeout occurred before data could be pushed
◆ multicore_fifo_rvalid()
static bool multicore_fifo_rvalid (void) [inline, static]
Check the read FIFO to see if there is data available (sent by the other core)
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
true if the FIFO has data in it, false otherwise
◆ multicore_fifo_wready()
static bool multicore_fifo_wready (void) [inline, static]
Check the write FIFO to see if it has space for more data.
See the note in the fifo section for considerations regarding use of the inter-core FIFOs
Returns
true if the FIFO has room for more data, false otherwise
lockout
Functions to enable one core to force the other core to pause execution in a known state.
Functions
-
void multicore_lockout_victim_init (void)
-
Initialize the current core such that it can be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
-
void multicore_lockout_start_blocking (void)
-
Request the other core to pause in a known state and wait for it to do so.
-
bool multicore_lockout_start_timeout_us (uint64_t timeout_us)
-
Request the other core to pause in a known state and wait up to a time limit for it to do so.
-
void multicore_lockout_end_blocking (void)
-
Release the other core from a locked out state and wait for it to acknowledge.
-
bool multicore_lockout_end_timeout_us (uint64_t timeout_us)
-
Release the other core from a locked out state and wait up to a time limit for it to acknowledge.
Detailed Description
Functions to enable one core to force the other core to pause execution in a known state.
Sometimes it is useful to enter a critical section on both cores at once. On a single core system a critical section can trivially be entered by disabling interrupts, however on a multi-core system that is not sufficient, and unless the other core is polling in some way, then it will need to be interrupted in order to cooperatively enter a blocked state.
These "lockout" functions use the inter core FIFOs to cause an interrupt on one core from the other, and manage waiting for the other core to enter the "locked out" state.
The usage is that the "victim" core ... i.e the core that can be "locked out" by the other core calls multicore_lockout_victim_init to hook the FIFO interrupt. Note that either or both cores may do this.
Note: When "locked out", the victim core is paused (it is actually executing a tight loop with code in RAM) and has interrupts disabled. This makes the lockout functions suitable for use by code that wants to write to flash (at which point no code may be executing from flash).
The core which wishes to lockout the other core calls multicore_lockout_start_blocking or multicore_lockout_start_timeout_us to interrupt the other "victim" core and wait for it to be in a "locked out" state. Once the lockout is no longer needed it calls multicore_lockout_end_blocking or multicore_lockout_end_timeout_us to release the lockout and wait for confirmation.
Note: Because multicore lockout uses the intercore FIFOs, the FIFOs cannot be used for any other purpose.
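A sketch of the overall pattern (the function names are illustrative; a flash write is the canonical use case):

#include "pico/multicore.h"

// On the victim core (e.g. core 1), once at startup:
//   multicore_lockout_victim_init();

// On the other core, pause the victim around a sensitive operation:
static void do_exclusive_work(void (*op)(void)) {
    multicore_lockout_start_blocking();  // wait for the victim core to park
    op();                                // e.g. code that writes to flash
    multicore_lockout_end_blocking();    // release the victim core
}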
Function Documentation
◆ multicore_lockout_end_blocking()
void multicore_lockout_end_blocking | ( | void | ) |
Release the other core from a locked out state and wait for it to acknowledge.
Note: The other core must previously have been "locked out" by calling a multicore_lockout_start_ function from this core.
◆ multicore_lockout_end_timeout_us()
bool multicore_lockout_end_timeout_us | ( | uint64_t | timeout_us | ) |
Release the other core from a locked out state and wait up to a time limit for it to acknowledge.
The other core must previously have been "locked out" by calling a multicore_lockout_start_ function from this core.
Note: be very careful using small timeout values, as a timeout here will leave the "lockout" functionality in a bad state. It is probably preferable to use multicore_lockout_end_blocking anyway, as if you have already waited for the victim core to enter the lockout state, then the victim core will be ready to exit the lockout state very quickly.
Parameters
timeout_us | the timeout in microseconds |
Returns
true if the other core successfully exited locked out state within the timeout, false otherwise
◆ multicore_lockout_start_blocking()
void multicore_lockout_start_blocking | ( | void | ) |
Request the other core to pause in a known state and wait for it to do so.
The other (victim) core must have previously executed multicore_lockout_victim_init()
Note: multicore_lockout_start_ functions are not nestable, and must be paired with a call to a corresponding multicore_lockout_end_blocking.
◆ multicore_lockout_start_timeout_us()
bool multicore_lockout_start_timeout_us | ( | uint64_t | timeout_us | ) |
Request the other core to pause in a known state and wait up to a time limit for it to do so.
The other core must have previously executed multicore_lockout_victim_init()
Note: multicore_lockout_start_ functions are not nestable, and must be paired with a call to a corresponding multicore_lockout_end_blocking.
Parameters
timeout_us | the timeout in microseconds |
Returns
true if the other core entered the locked out state within the timeout, false otherwise
◆ multicore_lockout_victim_init()
void multicore_lockout_victim_init | ( | void | ) |
Initialize the current core such that it can be a "victim" of lockout (i.e. forced to pause in a known state by the other core)
This code hooks the intercore FIFO IRQ, and the FIFO may not be used for any other purpose after this.
pico_i2c_slave
Typedefs
-
typedef enum i2c_slave_event_t i2c_slave_event_t
-
I2C slave event types.
-
typedef void(* i2c_slave_handler_t) (i2c_inst_t *i2c, i2c_slave_event_t event)
-
I2C slave event handler.
Enumerations
-
enum i2c_slave_event_t { I2C_SLAVE_RECEIVE , I2C_SLAVE_REQUEST , I2C_SLAVE_FINISH }
-
I2C slave event types.
Functions
-
void i2c_slave_init (i2c_inst_t *i2c, uint8_t address, i2c_slave_handler_t handler)
-
Configure an I2C instance for slave mode.
-
void i2c_slave_deinit (i2c_inst_t *i2c)
-
Restore an I2C instance to master mode.
Detailed Description
Functions providing an interrupt driven I2C slave interface.
This I2C slave helper library configures slave mode and hooks the relevant I2C IRQ so that a user supplied handler is called with enumerated I2C events.
An example application, slave_mem_i2c, which makes use of this library, can be found in pico-examples.
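A condensed sketch of the same idea (a one-byte "register"; the address 0x17 and the names are illustrative, and GPIO setup for SDA/SCL is omitted):

#include "hardware/i2c.h"
#include "pico/i2c_slave.h"

static volatile uint8_t last_byte;

static void my_handler(i2c_inst_t *i2c, i2c_slave_event_t event) {
    switch (event) {
    case I2C_SLAVE_RECEIVE:                 // master has written a byte
        last_byte = i2c_read_byte_raw(i2c);
        break;
    case I2C_SLAVE_REQUEST:                 // master is reading a byte
        i2c_write_byte_raw(i2c, last_byte);
        break;
    case I2C_SLAVE_FINISH:                  // stop / repeated start seen
        break;
    }
}

// setup:
//   i2c_init(i2c0, 100 * 1000);
//   i2c_slave_init(i2c0, 0x17, my_handler);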
Typedef Documentation
◆ i2c_slave_handler_t
typedef void(* i2c_slave_handler_t) (i2c_inst_t *i2c, i2c_slave_event_t event)
I2C slave event handler.
The event handler will run from the I2C ISR, so it should return quickly (under 25 us at 400 kb/s). Avoid blocking inside the handler, and split large data transfers across multiple calls for best results. When sending data to the master, up to i2c_get_write_available() bytes can be written without blocking. When receiving data from the master, up to i2c_get_read_available() bytes can be read without blocking.
Parameters
i2c | Either i2c0 or i2c1 |
event | Event type. |
Enumeration Type Documentation
◆ i2c_slave_event_t
enum i2c_slave_event_t
I2C slave event types.
Function Documentation
◆ i2c_slave_init()
void i2c_slave_init | ( | i2c_inst_t * | i2c, |
uint8_t | address, | ||
i2c_slave_handler_t | handler | ||
) |
Configure an I2C instance for slave mode.
Parameters
i2c | I2C instance. |
address | 7-bit slave address. |
handler | Callback for events from I2C master. It will run from the I2C ISR, on the CPU core where the slave was initialised. |
pico_rand
Functions
-
void get_rand_128 (rng_128_t *rand128)
-
Get 128-bit random number.
-
uint64_t get_rand_64 (void)
-
Get 64-bit random number.
-
uint32_t get_rand_32 (void)
-
Get 32-bit random number.
Detailed Description
Random Number Generator API
This module generates random numbers at runtime utilizing a number of possible entropy sources and uses those sources to modify the state of a 128-bit 'Pseudo Random Number Generator' implemented in software.
The random numbers (32 to 128 bit) to be supplied are read from the PRNG which is used to help provide a large number space.
The following (multiple) sources of entropy are available (of varying quality), each enabled by a #define:
-
The Ring Oscillator (ROSC) (PICO_RAND_ENTROPY_SRC_ROSC == 1): PICO_RAND_ROSC_BIT_SAMPLE_COUNT bits are gathered from the ring oscillator "random bit" and mixed in each time. This should not be used if the ROSC is off, or the processor is running from the ROSC.
Note: the maximum throughput of ROSC bit sampling is controlled by PICO_RAND_MIN_ROSC_BIT_SAMPLE_TIME_US, which defaults to 10us, i.e. 100,000 bits per second.
Time (PICO_RAND_ENTROPY_SRC_TIME == 1): The 64-bit microsecond timer is mixed in each time.
Bus Performance Counter (PICO_RAND_ENTROPY_SRC_BUS_PERF_COUNTER == 1): One of the bus fabric's performance counters is mixed in each time.
Note: All entropy sources are hashed before application to the PRNG state machine.
The first time a random number is requested, the 128-bit PRNG state must be seeded. Multiple entropy sources are also available for the seeding operation:
The Ring Oscillator (ROSC) (PICO_RAND_SEED_ENTROPY_SRC_ROSC == 1): 64 bits are gathered from the ring oscillator "random bit" and mixed into the seed.
Time (PICO_RAND_SEED_ENTROPY_SRC_TIME == 1): The 64-bit microsecond timer is mixed into the seed.
Board Identifier (PICO_RAND_SEED_ENTROPY_SRC_BOARD_ID == 1): The board id via pico_get_unique_board_id is mixed into the seed.
RAM hash (PICO_RAND_SEED_ENTROPY_SRC_RAM_HASH == 1): The hashed contents of a subset of RAM are mixed in. Initial RAM contents are undefined on power up, so provide a reasonable source of entropy. By default the last 1K of RAM (which usually contains the core 0 stack) is hashed, which may also provide for differences after each warm reset.
With default settings, the seed generation takes approximately 1 millisecond while subsequent random numbers generally take between 10 and 20 microseconds to generate.
pico_rand methods may be safely called from either core or from an IRQ, but be careful in the latter case as the calls may block for a number of microseconds waiting on more entropy.
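A brief usage sketch (the function name is illustrative):

#include <stdio.h>
#include "pico/rand.h"

void print_randoms(void) {
    printf("%08x\n", (unsigned) get_rand_32());
    printf("%016llx\n", (unsigned long long) get_rand_64());
    rng_128_t r128;
    get_rand_128(&r128);  // fills the 128-bit struct in place
}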
Function Documentation
◆ get_rand_128()
void get_rand_128 | ( | rng_128_t * | rand128 | ) |
Get 128-bit random number.
This method may be safely called from either core or from an IRQ, but be careful in the latter case as the call may block for a number of microseconds waiting on more entropy.
Parameters
rand128 | Pointer to storage to accept a 128-bit random number |
pico_stdlib
Functions
-
void setup_default_uart (void)
-
Set up the default UART and assign it to the default GPIOs.
-
void set_sys_clock_48mhz (void)
-
Initialise the system clock to 48MHz.
-
void set_sys_clock_pll (uint32_t vco_freq, uint post_div1, uint post_div2)
-
Initialise the system clock.
-
bool check_sys_clock_khz (uint32_t freq_khz, uint *vco_freq_out, uint *post_div1_out, uint *post_div2_out)
-
Check if a given system clock frequency is valid/attainable.
-
static bool set_sys_clock_khz (uint32_t freq_khz, bool required)
-
Attempt to set a system clock frequency in kHz.
Detailed Description
Aggregation of a core subset of Raspberry Pi Pico SDK libraries used by most executables, along with some additional utility methods. Including pico_stdlib gives you everything you need to get a basic program running which prints to stdout or flashes an LED.
This library aggregates:
These functions use some basic default values that are usable out of the box; however, they can be customised in a board definition header via config.h or similar.
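A minimal sketch of a program using this library (133000 kHz, i.e. 133 MHz, is just an example frequency):

#include <stdio.h>
#include "pico/stdlib.h"

int main(void) {
    set_sys_clock_khz(133000, true);  // optional: retarget the system clock first
    setup_default_uart();             // UART 0 on GPIO 0/1 at 115200 by default
    printf("Hello, world!\n");        // works if pico_stdio_uart is linked
    return 0;
}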
Function Documentation
◆ check_sys_clock_khz()
bool check_sys_clock_khz | ( | uint32_t | freq_khz, |
uint * | vco_freq_out, | ||
uint * | post_div1_out, | ||
uint * | post_div2_out | ||
) |
Check if a given system clock frequency is valid/attainable.
Parameters
freq_khz | Requested frequency |
vco_freq_out | On success, the voltage controlled oscillator frequency to be used by the SYS PLL |
post_div1_out | On success, The first post divider for the SYS PLL |
post_div2_out | On success, The second post divider for the SYS PLL. |
Returns
true if the frequency is possible and the output parameters have been written.
◆ set_sys_clock_48mhz()
void set_sys_clock_48mhz | ( | void | ) |
Initialise the system clock to 48MHz.
Set the system clock to 48MHz, and set the peripheral clock to match.
◆ set_sys_clock_khz()
static bool set_sys_clock_khz (uint32_t freq_khz, bool required) [inline, static]
Attempt to set a system clock frequency in kHz.
Note that not all clock frequencies are possible; it is preferred that you use src/rp2_common/hardware_clocks/scripts/vcocalc.py to calculate the parameters for use with set_sys_clock_pll
Parameters
freq_khz | Requested frequency |
required | if true then this function will assert if the frequency is not attainable. |
Returns
true if the clock was configured
◆ set_sys_clock_pll()
void set_sys_clock_pll | ( | uint32_t | vco_freq, |
uint | post_div1, | ||
uint | post_div2 | ||
) |
Initialise the system clock.
Parameters
vco_freq | The voltage controlled oscillator frequency to be used by the SYS PLL |
post_div1 | The first post divider for the SYS PLL |
post_div2 | The second post divider for the SYS PLL. |
See the PLL documentation in the datasheet for details of driving the PLLs.
◆ setup_default_uart()
void setup_default_uart | ( | void | ) |
Set up the default UART and assign it to the default GPIO's.
By default this will use UART 0, with TX on GPIO 0, RX on GPIO 1, and a baud rate of 115200.
Calling this method also initializes stdin/stdout over UART if the pico_stdio_uart library is linked.
Defaults can be changed using the configuration defines PICO_DEFAULT_UART_INSTANCE, PICO_DEFAULT_UART_BAUD_RATE, PICO_DEFAULT_UART_TX_PIN and PICO_DEFAULT_UART_RX_PIN.
pico_sync
Modules
-
Critical Section API for short-lived mutual exclusion safe for IRQ and multi-core.
-
base synchronization/lock primitive support
-
Mutex API for non IRQ mutual exclusion between cores.
-
Semaphore API for restricting access to a resource.
critical_section
Critical Section API for short-lived mutual exclusion safe for IRQ and multi-core.
Functions
-
void critical_section_init (critical_section_t *crit_sec)
-
Initialise a critical_section structure allowing the system to assign a spin lock number.
-
void critical_section_init_with_lock_num (critical_section_t *crit_sec, uint lock_num)
-
Initialise a critical_section structure assigning a specific spin lock number.
-
static void critical_section_enter_blocking (critical_section_t *crit_sec)
-
Enter a critical_section.
-
static void critical_section_exit (critical_section_t *crit_sec)
-
Release a critical_section.
-
void critical_section_deinit (critical_section_t *crit_sec)
-
De-Initialise a critical_section created by the critical_section_init method.
Detailed Description
Critical Section API for short-lived mutual exclusion safe for IRQ and multi-core.
A critical section is non-reentrant, and provides mutual exclusion using a spin-lock to prevent access from the other core, and from (higher priority) interrupts on the same core. It does the former using a spin lock and the latter by disabling interrupts on the calling core.
Because interrupts are disabled when a critical_section is owned, uses of the critical_section should be as short as possible.
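A typical usage sketch (the names are illustrative):

#include "pico/critical_section.h"

static critical_section_t crit_sec;
static volatile uint32_t shared_counter;

void setup(void) {
    critical_section_init(&crit_sec);
}

void increment_counter(void) {
    critical_section_enter_blocking(&crit_sec);  // IRQs disabled, spin lock held
    shared_counter++;                            // keep this region short
    critical_section_exit(&crit_sec);
}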
Function Documentation
◆ critical_section_deinit()
void critical_section_deinit | ( | critical_section_t * | crit_sec | ) |
De-Initialise a critical_section created by the critical_section_init method.
This method is only used to free the associated spin lock allocated via the critical_section_init method (it should not be used to de-initialize a spin lock created via critical_section_init_with_lock_num). After this call, the critical section is invalid
Parameters
crit_sec | Pointer to critical_section structure |
◆ critical_section_enter_blocking()
static void critical_section_enter_blocking (critical_section_t *crit_sec) [inline, static]
Enter a critical_section.
If the spin lock associated with this critical section is in use, then this method will block until it is released.
Parameters
crit_sec | Pointer to critical_section structure |
◆ critical_section_exit()
static void critical_section_exit (critical_section_t *crit_sec) [inline, static]
Release a critical_section.
Parameters
crit_sec | Pointer to critical_section structure |
◆ critical_section_init()
void critical_section_init | ( | critical_section_t * | crit_sec | ) |
Initialise a critical_section structure allowing the system to assign a spin lock number.
The critical section is initialized ready for use, and will use a (possibly shared) spin lock number assigned by the system. Note that in general it is unlikely that you would be nesting critical sections; however if you do so, you must use critical_section_init_with_lock_num to ensure that the spin locks used are different.
Parameters
crit_sec | Pointer to critical_section structure |
◆ critical_section_init_with_lock_num()
void critical_section_init_with_lock_num | ( | critical_section_t * | crit_sec, |
uint | lock_num | ||
) |
Initialise a critical_section structure assigning a specific spin lock number.
Parameters
crit_sec | Pointer to critical_section structure |
lock_num | the specific spin lock number to use |
lock_core
base synchronization/lock primitive support
Files
-
file lock_core.h
Macros
-
#define lock_owner_id_t int8_t
-
type to use to store the 'owner' of a lock. By default this is int8_t, as it only needs to store the core number or -1; however it may be overridden if a larger type is required (e.g. for an RTOS task id)
-
#define LOCK_INVALID_OWNER_ID ((lock_owner_id_t)-1)
-
marker value to use for a lock_owner_id_t which does not refer to any valid owner
-
#define lock_get_caller_owner_id() ((lock_owner_id_t)get_core_num())
-
return the owner id for the caller. By default this returns the calling core number, but may be overridden (e.g. to return an RTOS task id)
-
#define lock_internal_spin_unlock_with_wait(lock, save) spin_unlock((lock)->spin_lock, save), __wfe()
-
Atomically unlock the lock's spin lock, and wait for a notification.
-
#define lock_internal_spin_unlock_with_notify(lock, save) spin_unlock((lock)->spin_lock, save), __sev()
-
Atomically unlock the lock's spin lock, and send a notification.
-
#define lock_internal_spin_unlock_with_best_effort_wait_or_timeout(lock, save, until)
-
Atomically unlock the lock's spin lock, and wait for a notification or a timeout.
-
#define sync_internal_yield_until_before(until) ((void)0)
-
yield to other processing until some time before the requested time
Detailed Description
base synchronization/lock primitive support
Most of the pico_sync locking primitives contain a lock_core_t structure member. This currently just holds a spin lock which is used only to protect the contents of the rest of the structure as part of implementing the synchronization primitive. As such, the spin_lock member of lock core is never still held on return from any function for the primitive.
critical_section is an exceptional case in that it does not have a lock_core_t and simply wraps a spin lock, providing methods to lock and unlock said spin lock.
lock_core based structures work by locking the spin lock, checking state, and then deciding whether they additionally need to block or notify when the spin lock is released. In the blocking case, they will wake up again in the future, and try the process again.
By default the SDK just uses the processors' events via SEV and WFE for notification and blocking, as these are sufficient for cross-core notification, and notification from interrupt handlers. However macros are defined in this file that abstract the wait and notify mechanisms to allow the SDK locking functions to effectively be used within an RTOS or other environment.
When implementing an RTOS, it is desirable for the SDK synchronization primitives that wait to block the calling task (and immediately yield), and for those that notify to wake a blocked task which isn't on processor. At least the wait macro implementation needs to be atomic with the protecting spin_lock unlock from the caller's point of view; i.e. the task should unlock the spin lock when it starts its wait. Such implementation is up to the RTOS integration; however the macros are defined such that such operations are always combined into a single call (so they can be performed atomically), even though the default implementation does not need this, as a WFE which starts following the corresponding SEV is not missed.
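As a purely illustrative sketch of such an override (my_rtos_unlock_and_wait is a hypothetical hook, not an SDK or FreeRTOS function; real RTOS integrations are more involved):

// Defined before pico/lock_core.h provides its default, so the default is not used.
// The hypothetical hook atomically releases the spin lock and blocks the calling
// task until notified, restoring the saved IRQ state as spin_unlock would.
#define lock_internal_spin_unlock_with_wait(lock, save) \
    my_rtos_unlock_and_wait((lock)->spin_lock, save)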
Macro Definition Documentation
◆ lock_internal_spin_unlock_with_best_effort_wait_or_timeout
#define lock_internal_spin_unlock_with_best_effort_wait_or_timeout | ( | lock, | |
save, | |||
until | |||
) |
Atomically unlock the lock's spin lock, and wait for a notification or a timeout.
Atomic here refers to the fact that it should not be possible for a concurrent lock_internal_spin_unlock_with_notify to insert itself between the spin unlock and this wait in a way that the wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up in response to a lock_internal_spin_unlock_with_notify for the same lock, which completes after this call starts.
In an ideal implementation, this method would return exactly after the corresponding lock_internal_spin_unlock_with_notify has subsequently been called on the same lock instance or the timeout has been reached, however this method is free to return at any point before that; this macro is always used in a loop which locks the spin lock, checks the internal locking primitive state and then waits again if the calling thread should not proceed.
By default this simply unlocks the spin lock, and then calls best_effort_wfe_or_timeout but may be overridden (e.g. to actually block the RTOS task with a timeout).
Parameters
lock | the lock_core for the primitive which needs to block |
save | the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked (i.e. the PRIMASK state when the spin lock was acquired) |
until | the absolute_time_t value |
Returns
true if the timeout has been reached
◆ lock_internal_spin_unlock_with_notify
#define lock_internal_spin_unlock_with_notify | ( | lock, | |
save | |||
) | spin_unlock((lock)->spin_lock, save), __sev() |
Atomically unlock the lock's spin lock, and send a notification.
Atomic here refers to the fact that it should not be possible for this notification to happen during a lock_internal_spin_unlock_with_wait in a way that that wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up any lock_internal_spin_unlock_with_wait which started before this call completes.
In an ideal implementation, this method would wake up only the corresponding lock_internal_spin_unlock_with_wait that has been called on the same lock instance, however it is free to wake up any of them, as they will check their condition and then re-wait if necessary.
By default this macro simply unlocks the spin lock, and then performs a SEV, but may be overridden (e.g. to actually un-block RTOS task(s)).
Parameters
lock | the lock_core for the primitive which needs to block |
save | the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked (i.e. the PRIMASK state when the spin lock was acquired) |
◆ lock_internal_spin_unlock_with_wait
#define lock_internal_spin_unlock_with_wait | ( | lock, | |
save | |||
) | spin_unlock((lock)->spin_lock, save), __wfe() |
Atomically unlock the lock's spin lock, and wait for a notification.
Atomic here refers to the fact that it should not be possible for a concurrent lock_internal_spin_unlock_with_notify to insert itself between the spin unlock and this wait in a way that the wait does not see the notification (i.e. causing a missed notification). In other words this method should always wake up in response to a lock_internal_spin_unlock_with_notify for the same lock, which completes after this call starts.
In an ideal implementation, this method would return exactly after the corresponding lock_internal_spin_unlock_with_notify has subsequently been called on the same lock instance, however this method is free to return at any point before that; this macro is always used in a loop which locks the spin lock, checks the internal locking primitive state and then waits again if the calling thread should not proceed.
By default this macro simply unlocks the spin lock, and then performs a WFE, but may be overridden (e.g. to actually block the RTOS task).
Parameters
lock | the lock_core for the primitive which needs to block |
save | the uint32_t value that should be passed to spin_unlock when the spin lock is unlocked (i.e. the PRIMASK state when the spin lock was acquired) |
◆ sync_internal_yield_until_before
#define sync_internal_yield_until_before | ( | until | ) | ((void)0) |
yield to other processing until some time before the requested time
This method is provided for cases where the caller has no useful work to do until the specified time.
By default this method does nothing, however it can be overridden (for example by an RTOS which is able to block the current task until the scheduler tick before the given time)
Parameters
until | the absolute_time_t value |
Function Documentation
◆ lock_init()
void lock_init | ( | lock_core_t * | core, |
uint | lock_num | ||
) |
Initialise a lock structure.
Initialize a lock structure, providing the spin lock number to use for protecting internal state.
Parameters
core | Pointer to the lock_core to initialize |
lock_num | Spin lock number to use for the lock. As the spin lock is only used internally to the locking primitive method implementations, this does not need to be globally unique, however could suffer contention |
mutex
Mutex API for non IRQ mutual exclusion between cores.
Data Structures
-
struct __packed_aligned
-
recursive mutex instance
-
struct mutex
-
regular (non recursive) mutex instance
Macros
-
#define auto_init_mutex(name) static __attribute__((section(".mutex_array"))) mutex_t name
-
Helper macro for static definition of mutexes.
-
#define auto_init_recursive_mutex(name) static __attribute__((section(".mutex_array"))) recursive_mutex_t name = { .core = { .spin_lock = (spin_lock_t *)1 /* marker for runtime_init */ }, .owner = 0, .enter_count = 0 }
-
Helper macro for static definition of recursive mutexes.
Typedefs
-
typedef struct __packed_aligned recursive_mutex_t
-
recursive mutex instance
-
typedef struct __packed_aligned mutex mutex_t
-
regular (non recursive) mutex instance
Functions
-
static bool critical_section_is_initialized (critical_section_t *crit_sec)
-
Test whether a critical_section has been initialized.
-
void mutex_init (mutex_t *mtx)
-
Initialise a mutex structure.
-
void recursive_mutex_init (recursive_mutex_t *mtx)
-
Initialise a recursive mutex structure.
-
void mutex_enter_blocking (mutex_t *mtx)
-
Take ownership of a mutex.
-
void recursive_mutex_enter_blocking (recursive_mutex_t *mtx)
-
Take ownership of a recursive mutex.
-
bool mutex_try_enter (mutex_t *mtx, uint32_t *owner_out)
-
Attempt to take ownership of a mutex.
-
bool mutex_try_enter_block_until (mutex_t *mtx, absolute_time_t until)
-
Attempt to take ownership of a mutex until the specified time.
-
bool recursive_mutex_try_enter (recursive_mutex_t *mtx, uint32_t *owner_out)
-
Attempt to take ownership of a recursive mutex.
-
bool mutex_enter_timeout_ms (mutex_t *mtx, uint32_t timeout_ms)
-
Wait for mutex with timeout.
-
bool recursive_mutex_enter_timeout_ms (recursive_mutex_t *mtx, uint32_t timeout_ms)
-
Wait for recursive mutex with timeout.
-
bool mutex_enter_timeout_us (mutex_t *mtx, uint32_t timeout_us)
-
Wait for mutex with timeout.
-
bool recursive_mutex_enter_timeout_us (recursive_mutex_t *mtx, uint32_t timeout_us)
-
Wait for recursive mutex with timeout.
-
bool mutex_enter_block_until (mutex_t *mtx, absolute_time_t until)
-
Wait for mutex until a specific time.
-
bool recursive_mutex_enter_block_until (recursive_mutex_t *mtx, absolute_time_t until)
-
Wait for recursive mutex until a specific time.
-
void mutex_exit (mutex_t *mtx)
-
Release ownership of a mutex.
-
void recursive_mutex_exit (recursive_mutex_t *mtx)
-
Release ownership of a recursive mutex.
-
static bool mutex_is_initialized (mutex_t *mtx)
-
Test for mutex initialized state.
-
static bool recursive_mutex_is_initialized (recursive_mutex_t *mtx)
-
Test for recursive mutex initialized state.
Detailed Description
Mutex API for non IRQ mutual exclusion between cores.
Mutexes are application level locks usually used to protect data structures that might be used by multiple threads of execution. Unlike critical sections, the mutex protected code is not necessarily required/expected to complete quickly, as no other system wide locks are held on account of an acquired mutex.
When acquired, the mutex has an owner (see lock_get_caller_owner_id) which with the plain SDK is just the acquiring core, but in an RTOS it could be a task, or an IRQ handler context.
Two variants of mutex are provided; mutex_t (and associated mutex_ functions) is a regular mutex that cannot be acquired recursively by the same owner (a deadlock will occur if you try). recursive_mutex_t (and associated recursive_mutex_ functions) is a recursive mutex that can be recursively obtained by the same caller, at the expense of some more overhead when acquiring and releasing.
It is generally a bad idea to call blocking mutex_ or recursive_mutex_ functions from within an IRQ handler. It is valid to call mutex_try_enter or recursive_mutex_try_enter from within an IRQ handler, if the operation that would be conducted under lock can be skipped if the mutex is locked (at least by the same owner).
NOTE: For backwards compatibility with version 1.2.0 of the SDK, if the define PICO_MUTEX_ENABLE_SDK120_COMPATIBILITY is set to 1, then the regular mutex_ functions may also be used for recursive mutexes. This flag will be removed in a future version of the SDK.
See critical_section.h for protecting access between multiple cores AND IRQ handlers
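As an illustrative sketch (the names my_mutex, shared_count, my_init and update_count are hypothetical, not part of the API), a mutex protecting a shared counter might look like this:

#include "pico/mutex.h"

static mutex_t my_mutex;   // protects shared_count
static int shared_count;

void my_init(void) {
    mutex_init(&my_mutex);             // must be initialised once before use
}

void update_count(void) {
    mutex_enter_blocking(&my_mutex);   // blocks until ownership is granted
    shared_count++;                    // protected region
    mutex_exit(&my_mutex);             // release ownership
}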
Macro Definition Documentation
◆ auto_init_mutex
#define auto_init_mutex(name) static __attribute__((section(".mutex_array"))) mutex_t name
Helper macro for static definition of mutexes.
A mutex defined with this macro is equivalent to a separately declared mutex_t that is explicitly initialized with mutex_init() at runtime, except that the initialization is performed automatically during runtime initialization, as the sketch below shows.
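As a sketch (my_mutex and my_init_function are placeholder names), the automatic form:

auto_init_mutex(my_mutex);

is equivalent to the manual form:

static mutex_t my_mutex;
void my_init_function(void) {
    mutex_init(&my_mutex);   // must run before the mutex is first used
}

except that no explicit initialization call is needed.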
◆ auto_init_recursive_mutex
#define auto_init_recursive_mutex(name) static __attribute__((section(".mutex_array"))) recursive_mutex_t name = { .core = { .spin_lock = (spin_lock_t *)1 /* marker for runtime_init */ }, .owner = 0, .enter_count = 0 }
Helper macro for static definition of recursive mutexes.
A recursive mutex defined with this macro is equivalent to a separately declared recursive_mutex_t that is explicitly initialized with recursive_mutex_init() at runtime, except that the initialization is performed automatically during runtime initialization, as the sketch below shows.
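A sketch of the equivalence (placeholder names again):

auto_init_recursive_mutex(my_recursive_mutex);

behaves like:

static recursive_mutex_t my_recursive_mutex;
void my_init_function(void) {
    recursive_mutex_init(&my_recursive_mutex);
}

with the initialization performed for you at runtime startup.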
Function Documentation
◆ critical_section_is_initialized()
static bool critical_section_is_initialized (critical_section_t *crit_sec)
Test whether a critical_section has been initialized.
Parameters
crit_sec | Pointer to critical_section structure |
Returns
true if the critical section is initialized, false otherwise
◆ mutex_enter_block_until()
bool mutex_enter_block_until (mutex_t *mtx, absolute_time_t until)
Wait for mutex until a specific time.
Wait until the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to mutex structure |
until | The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
◆ mutex_enter_blocking()
void mutex_enter_blocking (mutex_t *mtx)
Take ownership of a mutex.
This function will block until the caller can be granted ownership of the mutex. On return the caller owns the mutex
Parameters
mtx | Pointer to mutex structure |
◆ mutex_enter_timeout_ms()
bool mutex_enter_timeout_ms (mutex_t *mtx, uint32_t timeout_ms)
Wait for mutex with timeout.
Wait for up to the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to mutex structure |
timeout_ms | The timeout in milliseconds. |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
◆ mutex_enter_timeout_us()
bool mutex_enter_timeout_us (mutex_t *mtx, uint32_t timeout_us)
Wait for mutex with timeout.
Wait for up to the specific time to take ownership of the mutex. If the caller can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to mutex structure |
timeout_us | The timeout in microseconds. |
Returns
true if mutex now owned, false if timeout occurred before ownership could be granted
◆ mutex_exit()
void mutex_exit (mutex_t *mtx)
Release ownership of a mutex.
Parameters
mtx | Pointer to mutex structure |
◆ mutex_init()
void mutex_init (mutex_t *mtx)
Initialise a mutex structure.
Parameters
mtx | Pointer to mutex structure |
◆ mutex_is_initialized()
static bool mutex_is_initialized (mutex_t *mtx)
Test for mutex initialized state.
Parameters
mtx | Pointer to mutex structure |
Returns
true if the mutex is initialized, false otherwise
◆ mutex_try_enter()
bool mutex_try_enter (mutex_t *mtx, uint32_t *owner_out)
Attempt to take ownership of a mutex.
If the mutex wasn't owned, this will claim the mutex for the caller and return true. Otherwise (if the mutex was already owned) this will return false and the caller will NOT own the mutex.
Parameters
mtx | Pointer to mutex structure |
owner_out | If mutex was already owned, and this pointer is non-zero, it will be filled in with the owner id of the current owner of the mutex |
Returns
true if mutex now owned, false otherwise
◆ mutex_try_enter_block_until()
bool mutex_try_enter_block_until (mutex_t *mtx, absolute_time_t until)
Attempt to take ownership of a mutex until the specified time.
If the mutex wasn't owned, this method will immediately claim the mutex for the caller and return true. If the mutex is already owned by the caller, this method will immediately return false. If the mutex is owned by someone else, this method will try to claim it until the specified time, returning true if it succeeds, or false on timeout
Parameters
mtx | Pointer to mutex structure |
until | The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if mutex now owned, false otherwise
◆ recursive_mutex_enter_block_until()
bool recursive_mutex_enter_block_until (recursive_mutex_t *mtx, absolute_time_t until)
Wait for recursive mutex until a specific time.
Wait until the specific time to take ownership of the mutex. If the caller already has ownership of the mutex or can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to recursive mutex structure |
until | The time after which to return if the caller cannot be granted ownership of the mutex |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
◆ recursive_mutex_enter_blocking()
void recursive_mutex_enter_blocking (recursive_mutex_t *mtx)
Take ownership of a recursive mutex.
This function will block until the caller can be granted ownership of the mutex. On return the caller owns the mutex
Parameters
mtx | Pointer to recursive mutex structure |
◆ recursive_mutex_enter_timeout_ms()
bool recursive_mutex_enter_timeout_ms (recursive_mutex_t *mtx, uint32_t timeout_ms)
Wait for recursive mutex with timeout.
Wait for up to the specific time to take ownership of the recursive mutex. If the caller already has ownership of the mutex or can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to recursive mutex structure |
timeout_ms | The timeout in milliseconds. |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
◆ recursive_mutex_enter_timeout_us()
bool recursive_mutex_enter_timeout_us (recursive_mutex_t *mtx, uint32_t timeout_us)
Wait for recursive mutex with timeout.
Wait for up to the specific time to take ownership of the recursive mutex. If the caller already has ownership of the mutex or can be granted ownership of the mutex before the timeout expires, then true will be returned and the caller will own the mutex, otherwise false will be returned and the caller will NOT own the mutex.
Parameters
mtx | Pointer to recursive mutex structure |
timeout_us | The timeout in microseconds. |
Returns
true if the recursive mutex (now) owned, false if timeout occurred before ownership could be granted
◆ recursive_mutex_exit()
void recursive_mutex_exit (recursive_mutex_t *mtx)
Release ownership of a recursive mutex.
Parameters
mtx | Pointer to recursive mutex structure |
◆ recursive_mutex_init()
void recursive_mutex_init (recursive_mutex_t *mtx)
Initialise a recursive mutex structure.
A recursive mutex may be entered in a nested fashion by the same owner
Parameters
mtx | Pointer to recursive mutex structure |
◆ recursive_mutex_is_initialized()
static bool recursive_mutex_is_initialized (recursive_mutex_t *mtx)
Test for recursive mutex initialized state.
Parameters
mtx | Pointer to recursive mutex structure |
Returns
true if the recursive mutex is initialized, false otherwise
◆ recursive_mutex_try_enter()
bool recursive_mutex_try_enter (recursive_mutex_t *mtx, uint32_t *owner_out)
Attempt to take ownership of a recursive mutex.
If the mutex wasn't owned or was owned by the caller, this will claim the mutex and return true. Otherwise (if the mutex was already owned by another owner) this will return false and the caller will NOT own the mutex.
Parameters
mtx | Pointer to recursive mutex structure |
owner_out | If mutex was already owned by another owner, and this pointer is non-zero, it will be filled in with the owner id of the current owner of the mutex |
Returns
true if the recursive mutex (now) owned, false otherwise
sem
Semaphore API for restricting access to a resource. More...
Functions
-
void sem_init (semaphore_t *sem, int16_t initial_permits, int16_t max_permits)
-
Initialise a semaphore structure.
-
int sem_available (semaphore_t *sem)
-
Return number of available permits on the semaphore.
-
bool sem_release (semaphore_t *sem)
-
Release a permit on a semaphore.
-
void sem_reset (semaphore_t *sem, int16_t permits)
-
Reset semaphore to a specific number of available permits.
-
void sem_acquire_blocking (semaphore_t *sem)
-
Acquire a permit from the semaphore.
-
bool sem_acquire_timeout_ms (semaphore_t *sem, uint32_t timeout_ms)
-
Acquire a permit from a semaphore, with timeout.
-
bool sem_acquire_timeout_us (semaphore_t *sem, uint32_t timeout_us)
-
Acquire a permit from a semaphore, with timeout.
-
bool sem_acquire_block_until (semaphore_t *sem, absolute_time_t until)
-
Wait to acquire a permit from a semaphore until a specific time.
-
bool sem_try_acquire (semaphore_t *sem)
-
Attempt to acquire a permit from a semaphore without blocking.
Detailed Description
Semaphore API for restricting access to a resource.
A semaphore holds a number of available permits. sem_acquire methods will acquire a permit if available (reducing the available count by 1) or block if the number of available permits is 0. sem_release() increases the number of available permits by one, potentially unblocking a sem_acquire method.
Note that sem_release() may be called an arbitrary number of times, however the number of available permits is capped to the max_permits value specified during semaphore initialization.
Although these semaphore related functions can be used from IRQ handlers, it is obviously preferable to only release semaphores from within an IRQ handler (i.e. avoid blocking)
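As an illustrative sketch (irq_sem and my_irq_handler are hypothetical names), a semaphore used to hand events from an IRQ to a main loop:

#include "pico/sem.h"

static semaphore_t irq_sem;

void my_irq_handler(void) {
    sem_release(&irq_sem);                // non-blocking, so IRQ safe
}

int main(void) {
    sem_init(&irq_sem, 0, 1);             // 0 permits initially, at most 1
    for (;;) {
        sem_acquire_blocking(&irq_sem);   // blocks until the IRQ releases
        // ... handle the event ...
    }
}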
Function Documentation
◆ sem_acquire_block_until()
bool sem_acquire_block_until (semaphore_t *sem, absolute_time_t until)
Wait to acquire a permit from a semaphore until a specific time.
This function will block and wait if no permits are available, until the specified timeout time. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem | Pointer to semaphore structure |
until | The time after which to return if the sem is not available. |
Returns
true if permit was acquired, false if the until time was reached before acquiring.
◆ sem_acquire_blocking()
void sem_acquire_blocking (semaphore_t *sem)
Acquire a permit from the semaphore.
This function will block and wait if no permits are available.
Parameters
sem | Pointer to semaphore structure |
◆ sem_acquire_timeout_ms()
bool sem_acquire_timeout_ms (semaphore_t *sem, uint32_t timeout_ms)
Acquire a permit from a semaphore, with timeout.
This function will block and wait if no permits are available, until the defined timeout has been reached. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem | Pointer to semaphore structure |
timeout_ms | Time to wait to acquire the semaphore, in milliseconds. |
Returns
false if timeout reached, true if permit was acquired.
◆ sem_acquire_timeout_us()
bool sem_acquire_timeout_us (semaphore_t *sem, uint32_t timeout_us)
Acquire a permit from a semaphore, with timeout.
This function will block and wait if no permits are available, until the defined timeout has been reached. If the timeout is reached the function will return false, otherwise it will return true.
Parameters
sem | Pointer to semaphore structure |
timeout_us | Time to wait to acquire the semaphore, in microseconds. |
Returns
false if timeout reached, true if permit was acquired.
◆ sem_available()
int sem_available (semaphore_t *sem)
Return number of available permits on the semaphore.
Parameters
sem | Pointer to semaphore structure |
Returns
The number of permits available on the semaphore.
◆ sem_init()
void sem_init (semaphore_t *sem, int16_t initial_permits, int16_t max_permits)
Initialise a semaphore structure.
Parameters
sem | Pointer to semaphore structure |
initial_permits | How many permits are initially acquired |
max_permits | Total number of permits allowed for this semaphore |
◆ sem_release()
bool sem_release (semaphore_t *sem)
Release a permit on a semaphore.
Increases the number of permits by one (unless the number of permits is already at the maximum). A blocked sem_acquire method will be released if the number of permits is increased.
Parameters
sem | Pointer to semaphore structure |
Returns
true if the number of permits available was increased.
◆ sem_reset()
void sem_reset (semaphore_t *sem, int16_t permits)
Reset semaphore to a specific number of available permits.
Reset value should be from 0 to the max_permits specified in the init function
Parameters
sem | Pointer to semaphore structure |
permits | the new number of available permits |
◆ sem_try_acquire()
bool sem_try_acquire (semaphore_t *sem)
Attempt to acquire a permit from a semaphore without blocking.
This function will return false without blocking if no permits are available, otherwise it will acquire a permit and return true.
Parameters
sem | Pointer to semaphore structure |
Returns
true if permit was acquired.
pico_time
Modules
-
Timestamp functions relating to points in time (including the current time)
-
Sleep functions for delaying execution in a lower power state.
-
Alarm functions for scheduling future execution.
-
Repeating Timer functions for simple scheduling of repeated execution.
Detailed Description
API for accurate timestamps, sleeping, and time based callbacks
Note |
The functions defined here provide a much more powerful and user friendly wrapping around the low level hardware timer functionality. For these functions (and any other SDK functionality, e.g. timeouts, that relies on them) to work correctly, the hardware timer should not be modified, i.e. it is expected to increase monotonically once per microsecond. Fortunately there is no need to modify the hardware timer, as any functionality you can think of that isn't already covered here can easily be modelled by adding or subtracting a constant value from the unmodified hardware timer. |
See also hardware_timer
timestamp
Timestamp functions relating to points in time (including the current time) More...
Functions
-
static uint64_t to_us_since_boot (absolute_time_t t)
-
convert an absolute_time_t into a number of microseconds since boot.
-
static void update_us_since_boot (absolute_time_t *t, uint64_t us_since_boot)
-
update an absolute_time_t value to represent a given number of microseconds since boot
-
static absolute_time_t from_us_since_boot (uint64_t us_since_boot)
-
convert a number of microseconds since boot to an absolute_time_t
-
static absolute_time_t get_absolute_time (void)
-
Return a representation of the current time.
-
static uint32_t to_ms_since_boot (absolute_time_t t)
-
Convert a timestamp into a number of milliseconds since boot.
-
static absolute_time_t delayed_by_us (const absolute_time_t t, uint64_t us)
-
Return a timestamp value obtained by adding a number of microseconds to another timestamp.
-
static absolute_time_t delayed_by_ms (const absolute_time_t t, uint32_t ms)
-
Return a timestamp value obtained by adding a number of milliseconds to another timestamp.
-
static absolute_time_t make_timeout_time_us (uint64_t us)
-
Convenience method to get the timestamp a number of microseconds from the current time.
-
static absolute_time_t make_timeout_time_ms (uint32_t ms)
-
Convenience method to get the timestamp a number of milliseconds from the current time.
-
static int64_t absolute_time_diff_us (absolute_time_t from, absolute_time_t to)
-
Return the difference in microseconds between two timestamps.
-
static absolute_time_t absolute_time_min (absolute_time_t a, absolute_time_t b)
-
Return the earlier of two timestamps.
-
static bool is_at_the_end_of_time (absolute_time_t t)
-
Determine if the given timestamp is "at_the_end_of_time".
-
static bool is_nil_time (absolute_time_t t)
-
Determine if the given timestamp is nil.
Variables
-
const absolute_time_t at_the_end_of_time
-
The timestamp representing the end of time; this is actually not the maximum possible timestamp, but is set to 0x7fffffff_ffffffff microseconds to avoid sign overflows with time arithmetic. This is almost 300,000 years, so should be sufficient.
-
const absolute_time_t nil_time
-
The timestamp representing a null timestamp.
Detailed Description
Timestamp functions relating to points in time (including the current time)
These are functions for dealing with timestamps (i.e. instants in time) represented by the type absolute_time_t. This opaque type is provided to help prevent accidental mixing of timestamps and relative time values.
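For illustration, a short sketch of typical timestamp arithmetic using these functions (the variable names are placeholders):

#include "pico/time.h"

absolute_time_t start = get_absolute_time();
absolute_time_t deadline = make_timeout_time_ms(100);   // 100 ms from now
// ... do some work ...
int64_t elapsed_us = absolute_time_diff_us(start, get_absolute_time());
bool expired = absolute_time_diff_us(get_absolute_time(), deadline) <= 0;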
Function Documentation
◆ absolute_time_diff_us()
static int64_t absolute_time_diff_us (absolute_time_t from, absolute_time_t to)
Return the difference in microseconds between two timestamps.
Note |
be careful when diffing against large timestamps (e.g. at_the_end_of_time) as the signed integer may overflow. |
Parameters
from | the first timestamp |
to | the second timestamp |
Returns
the number of microseconds between the two timestamps (positive if to is after from, except in case of overflow)
◆ absolute_time_min()
static absolute_time_t absolute_time_min (absolute_time_t a, absolute_time_t b)
Return the earlier of two timestamps.
Parameters
a | the first timestamp |
b | the second timestamp |
Returns
the earlier of the two timestamps
◆ delayed_by_ms()
static absolute_time_t delayed_by_ms (const absolute_time_t t, uint32_t ms)
Return a timestamp value obtained by adding a number of milliseconds to another timestamp.
Parameters
t | the base timestamp |
ms | the number of milliseconds to add |
Returns
the timestamp representing the resulting time
◆ delayed_by_us()
static absolute_time_t delayed_by_us (const absolute_time_t t, uint64_t us)
Return a timestamp value obtained by adding a number of microseconds to another timestamp.
Parameters
t | the base timestamp |
us | the number of microseconds to add |
Returns
the timestamp representing the resulting time
◆ from_us_since_boot()
static absolute_time_t from_us_since_boot (uint64_t us_since_boot)
convert a number of microseconds since boot to an absolute_time_t
Parameters
us_since_boot | number of microseconds since boot |
Returns
an absolute time equivalent to us_since_boot
◆ get_absolute_time()
static absolute_time_t get_absolute_time (void)
Return a representation of the current time.
Returns an opaque high fidelity representation of the current time sampled during the call.
Returns
the absolute time (now) of the hardware timer
See also absolute_time_t sleep_until() time_us_64()
◆ is_at_the_end_of_time()
static bool is_at_the_end_of_time (absolute_time_t t)
Determine if the given timestamp is "at_the_end_of_time".
Parameters
t | the timestamp |
Returns
true if the timestamp is at_the_end_of_time
See also at_the_end_of_time
◆ is_nil_time()
static bool is_nil_time (absolute_time_t t)
Determine if the given timestamp is nil.
Parameters
t | the timestamp |
Returns
true if the timestamp is nil
See also nil_time
◆ make_timeout_time_ms()
static absolute_time_t make_timeout_time_ms (uint32_t ms)
Convenience method to get the timestamp a number of milliseconds from the current time.
Parameters
ms | the number of milliseconds to add to the current timestamp |
Returns
the future timestamp
◆ make_timeout_time_us()
static absolute_time_t make_timeout_time_us (uint64_t us)
Convenience method to get the timestamp a number of microseconds from the current time.
Parameters
us | the number of microseconds to add to the current timestamp |
Returns
the future timestamp
◆ to_ms_since_boot()
static uint32_t to_ms_since_boot (absolute_time_t t)
Convert a timestamp into a number of milliseconds since boot.
Parameters
t | an absolute_time_t value to convert |
Returns
the number of milliseconds since boot represented by t
See also to_us_since_boot()
◆ to_us_since_boot()
static uint64_t to_us_since_boot (absolute_time_t t)
convert an absolute_time_t into a number of microseconds since boot.
Parameters
t | the absolute time to convert |
Returns
a number of microseconds since boot, equivalent to t
◆ update_us_since_boot()
static void update_us_since_boot (absolute_time_t *t, uint64_t us_since_boot)
update an absolute_time_t value to represent a given number of microseconds since boot
Parameters
t | the absolute time value to update |
us_since_boot | the number of microseconds since boot to represent. Note this should be representable as a signed 64 bit integer |
sleep
Sleep functions for delaying execution in a lower power state. More...
Functions
-
void sleep_until (absolute_time_t target)
-
Wait until after the given timestamp to return.
-
void sleep_us (uint64_t us)
-
Wait for the given number of microseconds before returning.
-
void sleep_ms (uint32_t ms)
-
Wait for the given number of milliseconds before returning.
-
bool best_effort_wfe_or_timeout (absolute_time_t timeout_timestamp)
-
Helper method for blocking on a timeout.
Detailed Description
Sleep functions for delaying execution in a lower power state.
These functions allow the calling core to sleep. This is a lower power sleep; it wakes and re-checks the time on every processor event (WFE).
Note |
These functions should not be called from an IRQ handler. Lower power sleep requires use of the default alarm pool, which may be disabled by the PICO_TIME_DEFAULT_ALARM_POOL_DISABLED #define or may currently be full, in which case these functions become busy waits instead. Whilst sleep_ functions are preferable to busy_wait functions from a power perspective, the busy_wait equivalent function may return slightly sooner after the target is reached. |
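As an illustrative sketch, sleep_until combined with delayed_by_ms gives a periodic loop whose cadence does not drift with the time taken by the work in each iteration (next_wake is a placeholder name):

absolute_time_t next_wake = get_absolute_time();
for (;;) {
    next_wake = delayed_by_ms(next_wake, 500);   // next tick on a 500 ms grid
    // ... do periodic work ...
    sleep_until(next_wake);                      // low power sleep to the tick
}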
Function Documentation
◆ best_effort_wfe_or_timeout()
bool best_effort_wfe_or_timeout (absolute_time_t timeout_timestamp)
Helper method for blocking on a timeout.
This method will return in response to an event (as per __wfe) or when the target time is reached, or at any point before.
This method can be used to implement a lower power polling loop waiting on some condition signalled by an event (__sev()).
This is called best_effort because under certain circumstances (notably the default timer pool being disabled or full) the best effort is simply to return immediately without a __wfe, thus turning the calling code into a busy wait.
Example usage:
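A sketch of such a loop; signalled is a hypothetical volatile flag set by an IRQ handler (the IRQ implicitly generates the event that wakes the waiting core), and wait_for_signal is a hypothetical wrapper:

static volatile bool signalled;

bool wait_for_signal(uint32_t timeout_ms) {
    absolute_time_t timeout_time = make_timeout_time_ms(timeout_ms);
    do {
        if (signalled) return true;    // condition met
    } while (!best_effort_wfe_or_timeout(timeout_time));
    return false;                      // timed out
}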
Parameters
timeout_timestamp | the timeout time |
Returns
true if the target time is reached, false otherwise
◆ sleep_ms()
void sleep_ms (uint32_t ms)
Wait for the given number of milliseconds before returning.
Note |
This method attempts to perform a lower power sleep (using WFE) as much as possible. |
Parameters
ms | the number of milliseconds to sleep |
◆ sleep_until()
void sleep_until (absolute_time_t target)
Wait until after the given timestamp to return.
Note |
This method attempts to perform a lower power (WFE) sleep |
Parameters
target | the time after which to return |
See also sleep_us() busy_wait_until()
◆ sleep_us()
void sleep_us (uint64_t us)
Wait for the given number of microseconds before returning.
Note |
This method attempts to perform a lower power (WFE) sleep |
Parameters
us | the number of microseconds to sleep |
See also busy_wait_us()
alarm
Alarm functions for scheduling future execution. More...
Macros
-
#define PICO_TIME_DEFAULT_ALARM_POOL_DISABLED 0
-
If 1 then the default alarm pool is disabled (so no hardware alarm is claimed for the pool)
-
#define PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM 3
-
Selects which hardware alarm is used for the default alarm pool.
-
#define PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS 16
-
Selects the maximum number of concurrent timers in the default alarm pool.
Typedefs
-
typedef int32_t alarm_id_t
-
The identifier for an alarm.
-
typedef int64_t(* alarm_callback_t) (alarm_id_t id, void *user_data)
-
User alarm callback.
Functions
-
void alarm_pool_init_default (void)
-
Create the default alarm pool (if not already created or disabled)
-
alarm_pool_t * alarm_pool_get_default (void)
-
The default alarm pool used when alarms are added without specifying an alarm pool, and also used by the SDK to support lower power sleeps and timeouts.
-
alarm_pool_t * alarm_pool_create (uint hardware_alarm_num, uint max_timers)
-
Create an alarm pool.
-
alarm_pool_t * alarm_pool_create_with_unused_hardware_alarm (uint max_timers)
-
Create an alarm pool, claiming an unused hardware alarm to back it.
-
uint alarm_pool_hardware_alarm_num (alarm_pool_t *pool)
-
Return the hardware alarm used by an alarm pool.
-
uint alarm_pool_core_num (alarm_pool_t *pool)
-
Return the core number the alarm pool was initialized on (and hence callbacks are called on)
-
void alarm_pool_destroy (alarm_pool_t *pool)
-
Destroy the alarm pool, cancelling all alarms and freeing up the underlying hardware alarm.
-
alarm_id_t alarm_pool_add_alarm_at (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called at a specific time.
-
alarm_id_t alarm_pool_add_alarm_at_force_in_context (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data)
-
Add an alarm callback to be called at or after a specific time.
-
static alarm_id_t alarm_pool_add_alarm_in_us (alarm_pool_t *pool, uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in microseconds.
-
static alarm_id_t alarm_pool_add_alarm_in_ms (alarm_pool_t *pool, uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in milliseconds.
-
bool alarm_pool_cancel_alarm (alarm_pool_t *pool, alarm_id_t alarm_id)
-
Cancel an alarm.
-
static alarm_id_t add_alarm_at (absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called at a specific time.
-
static alarm_id_t add_alarm_in_us (uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in microseconds.
-
static alarm_id_t add_alarm_in_ms (uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
-
Add an alarm callback to be called after a delay specified in milliseconds.
-
static bool cancel_alarm (alarm_id_t alarm_id)
-
Cancel an alarm from the default alarm pool.
Detailed Description
Alarm functions for scheduling future execution.
Alarms are added to alarm pools, which may hold a certain fixed number of active alarms. Each alarm pool utilizes one of four underlying hardware alarms, thus you may have up to four alarm pools. An alarm pool calls the callback on the core from which the alarm pool was created (except when the callback would happen before or during the alarm being set). Callbacks are called from the hardware alarm IRQ handler, so care must be taken in their implementation.
A default pool, backed by the hardware alarm specified by PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM, is created on core 0, and may be used by the method variants that take no alarm pool parameter.
See also struct alarm_pool hardware_timer
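A minimal sketch of one-shot alarm use with the default pool (my_alarm_cb and schedule_example are hypothetical names):

#include "pico/time.h"

int64_t my_alarm_cb(alarm_id_t id, void *user_data) {
    // runs in the alarm IRQ handler context: keep it brief
    return 0;                                  // 0 = do not reschedule
}

void schedule_example(void) {
    alarm_id_t id = add_alarm_in_ms(500, my_alarm_cb, NULL, true);
    if (id > 0) {
        // ... later, if the alarm may still be pending ...
        cancel_alarm(id);
    }
}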
Macro Definition Documentation
◆ PICO_TIME_DEFAULT_ALARM_POOL_DISABLED
#define PICO_TIME_DEFAULT_ALARM_POOL_DISABLED 0
If 1 then the default alarm pool is disabled (so no hardware alarm is claimed for the pool)
Note |
Setting to 1 may cause some code not to compile as default timer pool related methods are removed. When the default alarm pool is disabled, sleep_ methods and timeouts are no longer lower powered (they become busy_wait_) |
See also PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM alarm_pool_get_default()
◆ PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM
#define PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM 3
Selects which hardware alarm is used for the default alarm pool.
See also alarm_pool_get_default()
◆ PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS
#define PICO_TIME_DEFAULT_ALARM_POOL_MAX_TIMERS 16
Selects the maximum number of concurrent timers in the default alarm pool.
Note |
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
See also PICO_TIME_DEFAULT_ALARM_POOL_HARDWARE_ALARM_NUM alarm_pool_get_default()
Typedef Documentation
◆ alarm_callback_t
typedef int64_t(* alarm_callback_t) (alarm_id_t id, void *user_data)
User alarm callback.
Parameters
id | the alarm_id as returned when the alarm was added |
user_data | the user data passed when the alarm was added |
Returns
<0 to reschedule the same alarm this many us from the time the alarm was previously scheduled to fire
>0 to reschedule the same alarm this many us from the time this method returns
0 to not reschedule the alarm
◆ alarm_id_t
typedef int32_t alarm_id_t
The identifier for an alarm.
Note |
this identifier is signed because -1 is used as an error condition when creating alarms. Alarm ids may be reused; however, for convenience the implementation makes an attempt to defer reuse for as long as possible. You should certainly expect it to be hundreds of ids before one is reused, although in most cases it is more. Nonetheless care must still be taken when cancelling alarms or performing other alarm-based functionality when the alarm may have expired, as eventually the alarm id may be reused for another alarm. |
Function Documentation
◆ add_alarm_at()
static alarm_id_t add_alarm_at (absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called at a specific time.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
time | the timestamp when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls before or during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ add_alarm_in_ms()
static alarm_id_t add_alarm_in_ms (uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called after a delay specified in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
ms | the delay (from now) in milliseconds when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ add_alarm_in_us()
static alarm_id_t add_alarm_in_us (uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called after a delay specified in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
us | the delay (from now) in microseconds when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ alarm_pool_add_alarm_at()
alarm_id_t alarm_pool_add_alarm_at (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called at a specific time.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the callback (this determines which hardware alarm is used, and which core calls the callback) |
time | the timestamp when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls before or during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id for an active (at the time of return) alarm
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ alarm_pool_add_alarm_at_force_in_context()
alarm_id_t alarm_pool_add_alarm_at_force_in_context (alarm_pool_t *pool, absolute_time_t time, alarm_callback_t callback, void *user_data)
Add an alarm callback to be called at or after a specific time.
The callback is called as soon as possible after the time specified from an IRQ handler on the core the alarm pool was created on. Unlike alarm_pool_add_alarm_at, this method guarantees to call the callback from that core even if the time is during this method call or in the past.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the callback (this determines which hardware alarm is used, and which core calls the callback) |
time | the timestamp when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
Returns
>0 the alarm id for an active (at the time of return) alarm
-1 if there were no alarm slots available
◆ alarm_pool_add_alarm_in_ms()
static alarm_id_t alarm_pool_add_alarm_in_ms (alarm_pool_t *pool, uint32_t ms, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called after a delay specified in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the callback (this determines which hardware alarm is used, and which core calls the callback) |
ms | the delay (from now) in milliseconds when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls before or during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ alarm_pool_add_alarm_in_us()
static alarm_id_t alarm_pool_add_alarm_in_us (alarm_pool_t *pool, uint64_t us, alarm_callback_t callback, void *user_data, bool fire_if_past)
Add an alarm callback to be called after a delay specified in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the callback (this determines which hardware alarm is used, and which core calls the callback) |
us | the delay (from now) in microseconds when (after which) the callback should fire |
callback | the callback function |
user_data | user data to pass to the callback function |
fire_if_past | if true, and the alarm time falls during this call before the alarm can be set, then the callback should be called during (by) this function instead |
Returns
>0 the alarm id
0 if the alarm time passed before or during the call AND there is no active alarm to return the id of. The latter can either happen because fire_if_past was false (i.e. no timer was ever created), or if the callback was called during this method but the callback cancelled itself by returning 0
-1 if there were no alarm slots available
◆ alarm_pool_cancel_alarm()
bool alarm_pool_cancel_alarm (alarm_pool_t *pool, alarm_id_t alarm_id)
Cancel an alarm.
Parameters
pool | the alarm_pool containing the alarm |
alarm_id | the alarm |
Returns
true if the alarm was cancelled, false if it didn't exist
See also alarm_id_t for a note on reuse of IDs
◆ alarm_pool_core_num()
uint alarm_pool_core_num (alarm_pool_t *pool)
Return the core number the alarm pool was initialized on (and hence callbacks are called on)
Parameters
pool | the pool |
Returns
the core used by the pool
◆ alarm_pool_create()
alarm_pool_t * alarm_pool_create (uint hardware_alarm_num, uint max_timers)
Create an alarm pool.
The alarm pool will call callbacks from an alarm IRQ Handler on the core this function is called from.
In many situations there is never any need for anything other than the default alarm pool; however, you might want to create another if you want alarm callbacks on core 1, or require alarm pools of different priority (IRQ priority based preemption of callbacks)
Note |
This method will hard assert if the hardware alarm is already claimed. |
Parameters
hardware_alarm_num | the hardware alarm to use to back this pool |
max_timers | the maximum number of timers |
Note |
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
See also alarm_pool_get_default() hardware_claiming
◆ alarm_pool_create_with_unused_hardware_alarm()
alarm_pool_t * alarm_pool_create_with_unused_hardware_alarm (uint max_timers)
Create an alarm pool, claiming an unused hardware alarm to back it.
The alarm pool will call callbacks from an alarm IRQ Handler on the core this function is called from.
In many situations there is never any need for anything other than the default alarm pool; however, you might want to create another if you want alarm callbacks on core 1, or require alarm pools of different priority (IRQ priority based preemption of callbacks)
Note |
This method will hard assert if there is no free hardware alarm to claim. |
Parameters
max_timers | the maximum number of timers |
Note |
For implementation reasons this is limited to PICO_PHEAP_MAX_ENTRIES which defaults to 255 |
See also alarm_pool_get_default() hardware_claiming
◆ alarm_pool_destroy()
void alarm_pool_destroy (alarm_pool_t *pool)
Destroy the alarm pool, cancelling all alarms and freeing up the underlying hardware alarm.
Parameters
pool | the pool |
◆ alarm_pool_get_default()
alarm_pool_t * alarm_pool_get_default (void)
The default alarm pool used when alarms are added without specifying an alarm pool, and also used by the SDK to support lower power sleeps and timeouts.
◆ alarm_pool_hardware_alarm_num()
uint alarm_pool_hardware_alarm_num (alarm_pool_t *pool)
Return the hardware alarm used by an alarm pool.
Parameters
pool | the pool |
Returns
the hardware alarm used by the pool
◆ cancel_alarm()
static bool cancel_alarm (alarm_id_t alarm_id)
Cancel an alarm from the default alarm pool.
Parameters
alarm_id | the alarm |
Returns
true if the alarm was cancelled, false if it didn't exist
See also alarm_id_t for a note on reuse of IDs
repeating_timer
Repeating Timer functions for simple scheduling of repeated execution. More...
Data Structures
-
struct repeating_timer
-
Information about a repeating timer. More...
Typedefs
-
typedef bool(* repeating_timer_callback_t) (repeating_timer_t *rt)
-
Callback for a repeating timer.
Functions
-
bool alarm_pool_add_repeating_timer_us (alarm_pool_t *pool, int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
-
static bool alarm_pool_add_repeating_timer_ms (alarm_pool_t *pool, int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
-
static bool add_repeating_timer_us (int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
-
static bool add_repeating_timer_ms (int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
-
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
-
bool cancel_repeating_timer (repeating_timer_t *timer)
-
Cancel a repeating timer.
Detailed Description
Repeating Timer functions for simple scheduling of repeated execution.
Note |
The regular alarm_ functionality can be used to make repeating alarms (by returning non-zero from the callback); however, these methods abstract that further (at the cost of a user structure to store the repeat delay in, which the alarm framework does not have space for). |
Typedef Documentation
◆ repeating_timer_callback_t
typedef bool(* repeating_timer_callback_t) (repeating_timer_t *rt)
Callback for a repeating timer.
Parameters
rt | repeating timer structure containing information about the repeating timer. The user_data field is of primary importance to the user |
Returns
true to continue repeating, false to stop.
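A minimal sketch of a repeating timer (my_timer_cb and start_timer_example are hypothetical names); note the negative delay requests a fixed 250 ms between callback starts:

#include "pico/time.h"

bool my_timer_cb(repeating_timer_t *rt) {
    // rt->user_data carries the pointer passed at creation
    return true;                               // keep repeating
}

void start_timer_example(void) {
    static repeating_timer_t timer;            // must outlive the timer
    add_repeating_timer_ms(-250, my_timer_cb, NULL, &timer);
}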
Function Documentation
◆ add_repeating_timer_ms()
static bool add_repeating_timer_ms (int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
delay_ms | the repeat delay in milliseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 microsecond |
callback | the repeating timer callback function |
user_data | user data to store in the repeating_timer structure for use by the callback. |
out | the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
◆ add_repeating_timer_us()
static bool add_repeating_timer_us (int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core of the default alarm pool (generally core 0). If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
delay_us | the repeat delay in microseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 |
callback | the repeating timer callback function |
user_data | user data to store in the repeating_timer structure for use by the callback. |
out | the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
◆ alarm_pool_add_repeating_timer_ms()
static bool alarm_pool_add_repeating_timer_ms (alarm_pool_t *pool, int32_t delay_ms, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
Add a repeating timer that is called repeatedly at the specified interval in milliseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the repeating timer (this determines which hardware alarm is used, and which core calls the callback) |
delay_ms | the repeat delay in milliseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 microsecond |
callback | the repeating timer callback function |
user_data | user data to store in the repeating_timer structure for use by the callback. |
out | the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
◆ alarm_pool_add_repeating_timer_us()
bool alarm_pool_add_repeating_timer_us (alarm_pool_t *pool, int64_t delay_us, repeating_timer_callback_t callback, void *user_data, repeating_timer_t *out)
Add a repeating timer that is called repeatedly at the specified interval in microseconds.
Generally the callback is called as soon as possible after the time specified, from an IRQ handler on the core the alarm pool was created on. If the alarm time is in the past or happens before the alarm setup could be completed, then this method will optionally call the callback itself and then return a return code to indicate that the target time has passed.
Note |
It is safe to call this method from an IRQ handler (including alarm callbacks), and from either core. |
Parameters
pool | the alarm pool to use for scheduling the repeating timer (this determines which hardware alarm is used, and which core calls the callback) |
delay_us | the repeat delay in microseconds; if >0 then this is the delay between one callback ending and the next starting; if <0 then this is the negative of the time between the starts of the callbacks. The value of 0 is treated as 1 |
callback | the repeating timer callback function |
user_data | user data to store in the repeating_timer structure for use by the callback. |
out | the pointer to the user owned structure to store the repeating timer info in. BEWARE this storage location must outlive the repeating timer, so be careful of using stack space |
Returns
false if there were no alarm slots available to create the timer, true otherwise.
◆ cancel_repeating_timer()
bool cancel_repeating_timer (repeating_timer_t *timer)
Cancel a repeating timer.
Parameters
timer | the repeating timer to cancel |
Returns
true if the repeating timer was cancelled, false if it didn't exist
See also alarm_id_t for a note on reuse of IDs
pico_unique_id
Data Structures
-
struct pico_unique_board_id_t
-
Unique board identifier. More...
Functions
-
void pico_get_unique_board_id (pico_unique_board_id_t *id_out)
-
Get unique ID.
-
void pico_get_unique_board_id_string (char *id_out, uint len)
-
Get unique ID in string format.
Detailed Description
Unique device ID access API
RP2040 does not have an on-board unique identifier (all instances of RP2040 silicon are identical and have no persistent state). However, RP2040 boots from serial NOR flash devices which have a 64-bit unique ID as a standard feature, and there is a 1:1 association between RP2040 and flash, so this is suitable for use as a unique identifier for an RP2040-based board.
This library injects a call to the flash_get_unique_id function from the hardware_flash library, to run before main, and stores the result in a static location which can safely be accessed at any time via pico_get_unique_board_id().
This avoids some pitfalls of the hardware_flash API, which requires any flash-resident interrupt routines to be disabled when called into.
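For illustration, a sketch of reading the identifier as a string (print_board_id is a hypothetical name):

#include <stdio.h>
#include "pico/unique_id.h"

void print_board_id(void) {
    char id_str[2 * PICO_UNIQUE_BOARD_ID_SIZE_BYTES + 1];  // hex digits + NUL
    pico_get_unique_board_id_string(id_str, sizeof id_str);
    printf("Board ID: %s\n", id_str);
}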
Function Documentation
◆ pico_get_unique_board_id()
void pico_get_unique_board_id (pico_unique_board_id_t *id_out)
Get unique ID.
Get the unique 64-bit device identifier which was retrieved from the external NOR flash device at boot.
On PICO_NO_FLASH builds the unique identifier is set to all 0xEE.
Parameters
id_out | a pointer to a pico_unique_board_id_t struct, to which the identifier will be written |
◆ pico_get_unique_board_id_string()
void pico_get_unique_board_id_string (char *id_out, uint len)
Get unique ID in string format.
Get the unique 64-bit device identifier which was retrieved from the external NOR flash device at boot, formatted as an ASCII hex string. Will always 0-terminate.
On PICO_NO_FLASH builds the unique identifier is set to all 0xEE.
Parameters
id_out | a pointer to a char buffer of size len, to which the identifier will be written |
len | the size of id_out. For full serial, len >= 2 * PICO_UNIQUE_BOARD_ID_SIZE_BYTES + 1 |
pico_util
Useful data structures and utility functions. More...
datetime
Date/Time formatting. More...
Data Structures
-
struct datetime_t
-
Structure containing date and time information. More...
Functions
-
void datetime_to_str (char *buf, uint buf_size, const datetime_t *t)
-
Convert a datetime_t structure to a string.
Function Documentation
◆ datetime_to_str()
void datetime_to_str (char *buf, uint buf_size, const datetime_t *t)
Convert a datetime_t structure to a string.
Parameters
buf | character buffer to accept generated string |
buf_size | The size of the passed in buffer |
t | The datetime to be converted. |
pheap
Pairing Heap Implementation
pheap defines a simple pairing heap. The implementation simply tracks array indexes; it is up to the user to provide storage for heap entries and a comparison function.
NOTE: This class is not safe for concurrent usage. It should be externally protected. Furthermore, if used concurrently, the caller needs to protect around their use of the returned id. For example, ph_remove_and_free_head returns the id of an element that is no longer in the heap. The user can still use this to look at the data in their companion array; however, further operations on the heap may overwrite that data, as the id may be reused by subsequent operations.
queue
Functions
-
void queue_init_with_spinlock (queue_t *q, uint element_size, uint element_count, uint spinlock_num)
-
Initialise a queue with a specific spinlock for concurrency protection.
-
static void queue_init (queue_t *q, uint element_size, uint element_count)
-
Initialise a queue, allocating a (possibly shared) spinlock.
-
void queue_free (queue_t *q)
-
Destroy the specified queue.
-
static uint queue_get_level_unsafe (queue_t *q)
-
Unsafe check of the level of the specified queue.
-
static uint queue_get_level (queue_t *q)
-
Check the level of the specified queue.
-
static bool queue_is_empty (queue_t *q)
-
Check if queue is empty.
-
static bool queue_is_full (queue_t *q)
-
Check if queue is full.
-
bool queue_try_add (queue_t *q, const void *data)
-
Non-blocking add of a value to the queue if not full.
-
bool queue_try_remove (queue_t *q, void *data)
-
Non-blocking removal of an entry from the queue if non-empty.
-
bool queue_try_peek (queue_t *q, void *data)
-
Non-blocking peek at the next item to be removed from the queue.
-
void queue_add_blocking (queue_t *q, const void *data)
-
Blocking add of value to queue.
-
void queue_remove_blocking (queue_t *q, void *data)
-
Blocking remove entry from queue.
-
void queue_peek_blocking (queue_t *q, void *data)
-
Blocking peek at next value to be removed from queue.
Detailed Description
Multi-core and IRQ safe queue implementation.
Note that this queue stores values of a specified size, and pushed values are copied into the queue.
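A common pattern is a single queue carrying fixed-size records from one core to the other; since pushed values are copied, the producer can reuse its local variable immediately. A minimal sketch (entry_t and the element count are illustrative):

#include "pico/util/queue.h"
#include "pico/multicore.h"

typedef struct { uint32_t id; uint32_t value; } entry_t;

static queue_t sample_queue;

void core1_entry(void) {
    entry_t e = { .id = 1, .value = 0 };
    while (true) {
        queue_add_blocking(&sample_queue, &e);    // copied into the queue
        e.value++;                                // safe to modify right away
    }
}

int main(void) {
    queue_init(&sample_queue, sizeof(entry_t), 8);
    multicore_launch_core1(core1_entry);
    while (true) {
        entry_t e;
        queue_remove_blocking(&sample_queue, &e); // blocks until core 1 adds
        // use e.id / e.value here
    }
}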
Function Documentation
◆ queue_add_blocking()
void queue_add_blocking (queue_t *q, const void *data)
Blocking add of value to queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to value to be copied into the queue |
If the queue is full, this function will block until a removal happens on the queue.
◆ queue_get_level()
static inline uint queue_get_level (queue_t *q)
Check the level of the specified queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
Returns
Number of entries in the queue
◆ queue_get_level_unsafe()
static inline uint queue_get_level_unsafe (queue_t *q)
Unsafe check of the level of the specified queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
Returns
Number of entries in the queue
This does not use the spin lock, so it may return incorrect results if the spin lock is not externally locked.
◆ queue_init()
static inline void queue_init (queue_t *q, uint element_size, uint element_count)
Initialise a queue, allocating a (possibly shared) spinlock.
Parameters
q | Pointer to a queue_t structure, used as a handle |
element_size | Size of each value in the queue |
element_count | Maximum number of entries in the queue |
◆ queue_init_with_spinlock()
void queue_init_with_spinlock (queue_t *q, uint element_size, uint element_count, uint spinlock_num)
Initialise a queue with a specific spinlock for concurrency protection.
Parameters
q | Pointer to a queue_t structure, used as a handle |
element_size | Size of each value in the queue |
element_count | Maximum number of entries in the queue |
spinlock_num | The number (ID) of the spin lock used to protect the queue |
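If the queue should have a dedicated spin lock rather than the (possibly shared) one chosen by queue_init, a free spin lock can be claimed first, for example with spin_lock_claim_unused from hardware_sync. A minimal sketch (cmd_queue and the sizing are illustrative):

#include "pico/util/queue.h"
#include "hardware/sync.h"

queue_t cmd_queue;

void init_cmd_queue(void) {
    // Claim a free spin lock; with required=true this panics if none is left
    int lock_num = spin_lock_claim_unused(true);
    queue_init_with_spinlock(&cmd_queue, sizeof(uint32_t), 16, (uint)lock_num);
}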
◆ queue_is_empty()
static inline bool queue_is_empty (queue_t *q)
Check if queue is empty.
Parameters
q | Pointer to a queue_t structure, used as a handle |
Returns
true if queue is empty, false otherwise
This function is interrupt and multicore safe.
◆ queue_is_full()
static inline bool queue_is_full (queue_t *q)
Check if queue is full.
Parameters
q | Pointer to a queue_t structure, used as a handle |
Returns
true if queue is full, false otherwise
This function is interrupt and multicore safe.
◆ queue_peek_blocking()
void queue_peek_blocking (queue_t *q, void *data)
Blocking peek at next value to be removed from queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to the location to receive the peeked value |
If the queue is empty, this function will block until a value is added.
◆ queue_remove_blocking()
void queue_remove_blocking (queue_t *q, void *data)
Blocking remove entry from queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to the location to receive the removed value |
If the queue is empty, this function will block until a value is added.
◆ queue_try_add()
bool queue_try_add (queue_t *q, const void *data)
Non-blocking add of a value to the queue if not full.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to value to be copied into the queue |
Returns
true if the value was added
If the queue is full, this function will return immediately with false; otherwise the data is copied into a new entry added to the queue, and the function will return true.
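Because the queue is IRQ safe, queue_try_add is the natural choice inside an interrupt handler, where blocking is not acceptable. A minimal sketch (the handler, the read_sensor helper, and the dropped counter are all illustrative):

#include "pico/util/queue.h"

extern uint32_t read_sensor(void);   // hypothetical helper
static queue_t sample_queue;         // initialised elsewhere with queue_init
static uint32_t dropped_samples;     // diagnostic counter

void sensor_irq_handler(void) {
    uint32_t sample = read_sensor();
    if (!queue_try_add(&sample_queue, &sample)) {
        dropped_samples++;           // queue full: record the drop, move on
    }
}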
◆ queue_try_peek()
bool queue_try_peek (queue_t *q, void *data)
Non-blocking peek at the next item to be removed from the queue.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to the location to receive the peeked value |
Returns
true if there was a value to peek
If the queue is not empty, this function will return immediately with true, with the peeked entry copied into the location specified by the data parameter; otherwise the function will return false.
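For example, a consumer can look at the head of the queue to decide whether to commit to removing it. A minimal sketch (msg_t and the opcode test are illustrative):

#include <stdbool.h>
#include <stdint.h>
#include "pico/util/queue.h"

typedef struct { uint8_t opcode; uint8_t payload[7]; } msg_t;

bool next_is_shutdown(queue_t *q) {
    msg_t m;
    if (queue_try_peek(q, &m)) {
        return m.opcode == 0;   // the entry stays in the queue either way
    }
    return false;               // queue was empty
}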
◆ queue_try_remove()
bool queue_try_remove (queue_t *q, void *data)
Non-blocking removal of an entry from the queue if non-empty.
Parameters
q | Pointer to a queue_t structure, used as a handle |
data | Pointer to the location to receive the removed value |
Returns
true if a value was removed
If the queue is not empty, this function will copy the removed value into the location provided and return immediately with true; otherwise it will return immediately with false.
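A typical use is draining everything currently queued without blocking, for example at the top of a main loop. A minimal sketch (process is a hypothetical consumer):

#include <stdint.h>
#include "pico/util/queue.h"

extern void process(uint32_t value);   // hypothetical consumer
extern queue_t sample_queue;           // initialised elsewhere

void drain_queue(void) {
    uint32_t v;
    while (queue_try_remove(&sample_queue, &v)) {
        process(v);
    }
}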