Here are the notable changes.
Fix weekend recognition for real-time quote endpoints
Indicate errors in batch query responses
Limit each Batch API call to 2,000 queries
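Given the new per-call cap, a client with more than 2,000 queries must split them across calls. A minimal sketch of that chunking (the query values and batch helper are illustrative, not from this changelog):

```python
from typing import Iterator

BATCH_LIMIT = 2000  # maximum queries allowed per Batch API call

def chunk_queries(queries: list, limit: int = BATCH_LIMIT) -> Iterator[list]:
    """Yield successive sub-lists of at most `limit` queries each."""
    for start in range(0, len(queries), limit):
        yield queries[start:start + limit]

# 4,500 queries become three calls of 2000 + 2000 + 500.
batches = list(chunk_queries([f"q{i}" for i in range(4500)]))
```

Each sub-list would then be submitted as its own Batch API call.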
Exclude system datasets from the max dataset check. See Pricing for your plan's maximum number of datasets.
In dataset JSON schema, replace `smartLinkAttributes`. See Create a Dataset with the API.
When UTP is not authorized, specific fields are not returned from Quote, Intraday Prices, and OHLC endpoints. See Get Nasdaq-listed Stock Data.
Perform symbology lookup for aliased columns.
Add edit-on-GitHub icon for all docs site articles.
Fix Core dataset Open Docs link.
In credit use page, show user-visible IDs of deleted datasets.
Disable the `sql-query` Data API endpoint.
Premium Data is no longer available for purchase.
Show paid Core datasets to free users, but disable them.
Forbid dashes in workspace names.
Reduce initial console load time.
Handle pay-as-you-go credit if needed at upgrade
Show data ingestion progress in dataset overviews
Speed up time series metadata API call
Fix schema index UI
Add missing routes to GET /openapi-doc response
Deprecate the Sandbox testing environment
For a dataset’s Share API Docs token selector, show only tokens that have access to that dataset
Detect updates to time-series-inventory metadata, even when the dataset has no records
Return an error code in the message when an entity can’t be created
Fix workspace creation on upgrades from legacy plans
Fix upgrade from legacy Individual plan
Disable Cloud Cache in Apperate
Enable free Apperate plan tiers to create data sources, schedules, and credentials
Increase Minutebar transaction processing
Fix creating a dataset without any data source
Fix creating a dataset manually
Refactor Minutebar to consume fewer resources
Fix undefined data types in dataset API response attribute descriptions
Auto-enable pay-as-you-go in Apperate
Fix Core dataset API docs rendering speed
Restrict dataset names to alphanumeric characters and underscores
Show console error message when max number of datasets reached
Add descriptions for all Core datasets
Fix dataset record counts
Provide one-step dataset creation: Apperate infers the schema
Support renaming datasets
Deconflict references to API docs for core and private datasets with the same name
Mark Core dataset indexed attributes in Core datasets sidebar
Fix displaying attribute types in datasets sidebar
Deprecate the internal “platform” dataset
Fix propagating indexed columns from nested views
Correct the storage limit to 5 GB
If you click an S3 bucket file name, select that file for ingestion
If an S3 data source object isn't found, indicate the missing bucket or file, including its name and path
List each workspace’s datasets
If the user exceeds the storage limit, halt data ingestion jobs and print the amount of data currently stored
Keep auto-generated data sources from creating datasets
Fix ingestion schedule pause/re-enable functionality
List bucket contents when creating dataset from an S3 bucket
Add ability to create credentials in the Credentials dropdown
Add a dataset export mechanism
Make indexed schema properties required; allow all others to be null
Go to the create dataset page immediately upon creating a workspace
Add an API to page through S3 bucket listings
Make `_system` a reserved name prefix
Map the symbology property to the primary index property
Add APIs to generate schemas from remote data sources
Save console messages to the log stream
Add aliases as suggested words in the SQL editor
Validate schemas during dataset generation
Fix insert row on the DATASET → Database page
Support `DISTINCT` in SQL queries
Support changing the dataset property associated with IEX Cloud’s metadata graph.
Support querying datasets that have hyphens in their names
Generate JSON schema for SQL view datasets
Support sampling JSON data with JSONPath and displaying it in the UI. For details, go to Access Nested JSON Data.
Make table lookup case-insensitive
Add SQL support for column aliases. Try it in your dataset Database page's SQL editor.
Support `GROUP BY` in SQL queries
Get new/modified dataset data by calling `GET /data/:workspace/:id/:key?/:subkey?` and specifying a previous query ID with the `queryId` query parameter.
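As a sketch of this delta-query pattern, the request URL can be assembled like so (the base URL, workspace, and dataset names here are hypothetical):

```python
from urllib.parse import urlencode

BASE = "https://api.example.com"  # hypothetical base URL, not from this changelog

def delta_query_url(workspace: str, dataset: str, query_id: str) -> str:
    """Build a GET /data URL that asks for only the records that are
    new or modified since the query identified by `query_id`."""
    params = urlencode({"queryId": query_id})
    return f"{BASE}/data/{workspace}/{dataset}?{params}"

url = delta_query_url("MY_WORKSPACE", "cars", "abc123")
```

Issuing a GET on that URL would return just the delta since query `abc123`, rather than the full dataset.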
Support synchronous ingestion using the `wait` query parameter. You can ingest data synchronously when calling the `POST /data/:workspace/:id` endpoint by setting the `wait` query parameter to `true`. See Ingest data.
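A minimal sketch of preparing such a synchronous ingestion call (the base URL, workspace, dataset, and row payload are assumptions for illustration):

```python
import json
from urllib.parse import urlencode

BASE = "https://api.example.com"  # hypothetical base URL, not from this changelog

def ingest_request(workspace: str, dataset: str, rows: list, wait: bool = True):
    """Return the (url, body) pair for a POST /data ingestion call.
    With wait=true the API responds only after ingestion completes,
    instead of returning immediately and ingesting in the background."""
    params = urlencode({"wait": str(wait).lower()})
    url = f"{BASE}/data/{workspace}/{dataset}?{params}"
    return url, json.dumps(rows)

url, body = ingest_request("MY_WORKSPACE", "cars", [{"make": "Tesla"}])
```

The returned URL and JSON body would then be sent with whatever HTTP client you use.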
Add API endpoint to GET log messages. You can fetch your logs by calling the `GET /logs/:workspace` endpoint. See Get logs.
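A sketch of building the log-fetch request (the base URL and bearer-token header are assumptions; the changelog specifies only the endpoint path):

```python
from urllib.request import Request

BASE = "https://api.example.com"  # hypothetical base URL, not from this changelog

def logs_request(workspace: str, token: str) -> Request:
    """Build an authenticated GET /logs/:workspace request.
    The Authorization header scheme here is an assumption."""
    return Request(
        f"{BASE}/logs/{workspace}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = logs_request("MY_WORKSPACE", "sk_test_token")
```

Passing the request to `urllib.request.urlopen` (or any HTTP client) would return the workspace's log messages.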
Opt-in to map a property to our financial identifier metadata graph. See Get Started with an Example Dataset.