Modes and Inputs
GenSearch is AlphaSense's AI-powered research tool that lets you ask natural language questions and receive comprehensive, source-backed answers drawn from AlphaSense's extensive content library. GenSearch operates through a GraphQL API and offers four distinct modes, each designed for a different depth of analysis and response time. You initiate a query with a mutation, then poll for results until the response is complete.
Mode Comparison
| Mode | Credits | Response Time | Best For |
|---|---|---|---|
| fast | 10 credits | ~30s | Quick answers, real-time queries, simple lookups |
| auto | 10 credits | ~30-90s | Recommended default — automatically balances speed and depth |
| thinkLonger | 25 credits | ~60-90s | Deeper analysis, nuanced questions, multi-factor comparisons |
| deepResearch | 100 credits | ~12-15 min | Comprehensive research reports, detailed competitive analysis, investment memos |
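If you are budgeting batch workloads, the credit figures above translate directly into a cost estimate. A minimal sketch (the dictionary simply mirrors the table; it is not an API object):

```python
# Credit cost per query for each GenSearch mode (from the table above).
CREDITS = {"fast": 10, "auto": 10, "thinkLonger": 25, "deepResearch": 100}

def estimate_credits(queries: dict[str, int]) -> int:
    """Total credit cost for a batch, given a {mode: query count} mapping."""
    return sum(CREDITS[mode] * count for mode, count in queries.items())
```

For example, three fast lookups plus one deep-research report cost `estimate_credits({"fast": 3, "deepResearch": 1})` = 130 credits.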
Authentication
All GenSearch requests require authentication. First obtain a bearer token, then include it alongside your API key and client ID in every request.
- Python
- JavaScript
- cURL
import os
import requests

auth_response = requests.post(
    "https://api.alpha-sense.com/auth",
    headers={
        "x-api-key": os.environ["ALPHASENSE_API_KEY"],
        "Content-Type": "application/x-www-form-urlencoded",
    },
    data={
        "grant_type": "password",
        "username": os.environ["ALPHASENSE_EMAIL"],
        "password": os.environ["ALPHASENSE_PASSWORD"],
        "client_id": os.environ["ALPHASENSE_CLIENT_ID"],
        "client_secret": os.environ["ALPHASENSE_CLIENT_SECRET"],
    },
)
token = auth_response.json()["access_token"]
const authResponse = await fetch("https://api.alpha-sense.com/auth", {
method: "POST",
headers: {
"x-api-key": process.env.ALPHASENSE_API_KEY,
"Content-Type": "application/x-www-form-urlencoded",
},
body: new URLSearchParams({
grant_type: "password",
username: process.env.ALPHASENSE_EMAIL,
password: process.env.ALPHASENSE_PASSWORD,
client_id: process.env.ALPHASENSE_CLIENT_ID,
client_secret: process.env.ALPHASENSE_CLIENT_SECRET,
}),
});
const { access_token: token } = await authResponse.json();
curl --request POST 'https://api.alpha-sense.com/auth' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode "username=$ALPHASENSE_EMAIL" \
--data-urlencode "password=$ALPHASENSE_PASSWORD" \
--data-urlencode "client_id=$ALPHASENSE_CLIENT_ID" \
--data-urlencode "client_secret=$ALPHASENSE_CLIENT_SECRET"
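Every GenSearch request that follows sends the same four headers (API key, client ID, bearer token, content type). A small helper like this (an illustrative convenience, not part of any official SDK) avoids repeating them; the environment variable names match the examples above:

```python
import os

def gql_headers(token: str) -> dict:
    """Build the headers required by every GenSearch GraphQL request."""
    return {
        "x-api-key": os.environ["ALPHASENSE_API_KEY"],
        "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
```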
GenSearchInput Schema
Before building a request, here is the full shape of the input object accepted by every
GenSearch mode. Only prompt is required — everything else is optional.
variables = {
    "input": {
        # Required: your search query
        "prompt": "Your search question here",

        # Optional: focus on specific documents (AskInDoc)
        # Cannot be combined with "filters" — use one or the other
        "documents": [
            {"id": "document-id-here"}
        ],

        # Optional: narrow your search results
        "filters": {
            "sources": {"ids": ["source-id"]},
            "industries": ["401020"],
            "expertInsightsFilters": {
                "analystPerspectives": ["Investor-Led (Sell-Side)"],
                "expertPerspectives": ["Medical Professional"],
                "expertTranscriptType": ["Company Deep-Dive"]
            },
            "documentAuthors": ["Author Name"],
            "date": {
                "customRange": {"from": "2025-01-01", "to": "2025-06-30"},
                # OR use a preset instead:
                # "preset": "LAST_90_DAYS"
            },
            "countries": ["US", "CA"],
            "companies": {
                "include": ["AAPL", "MSFT"],
                # OR use a watchlist instead:
                # "watchlists": ["watchlist-id"]
            }
        },

        # Optional: include web search results
        "useWebSearch": True
    }
}
You can use documents (AskInDoc) or filters, but not both in the same request.
Key rules:
- prompt is the only required field
- Within filters, combine as many fields as you want; they use AND logic (every filter narrows the results further)
- For date, use either customRange or preset, not both
- companies.include and companies.watchlists cannot be combined; use one or the other
- useWebSearch lives at the input level, not inside filters
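If you build inputs programmatically, a client-side check for these mutual-exclusion rules can catch mistakes before a query spends credits. A sketch, assuming the plain-dict input shape shown above (this validation is illustrative and is not performed by any official SDK):

```python
def validate_gensearch_input(input_obj: dict) -> list[str]:
    """Return a list of constraint violations in a GenSearchInput dict."""
    errors = []
    if not input_obj.get("prompt"):
        errors.append("prompt is required")
    # documents (AskInDoc) and filters are mutually exclusive
    if "documents" in input_obj and "filters" in input_obj:
        errors.append("documents and filters cannot be combined")
    filters = input_obj.get("filters", {})
    date = filters.get("date", {})
    if "customRange" in date and "preset" in date:
        errors.append("date: use customRange or preset, not both")
    companies = filters.get("companies", {})
    if "include" in companies and "watchlists" in companies:
        errors.append("companies: use include or watchlists, not both")
    # useWebSearch belongs at the input level
    if "useWebSearch" in filters:
        errors.append("useWebSearch goes at the input level, not inside filters")
    return errors
```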
Search Filters
Each filter can be used on its own or combined with others. For lookup queries that return the IDs and codes used below, see Utility APIs.
Source Filters
Filter results by document source type — broker research, SEC filings, earnings transcripts, news, and more.
"filters": {
"sources": {"ids": ["31019"]} # Broker Research
}
Look up source IDs with the filingsTypesV3 query. See Utility APIs — Source Types.
Industry Filters (GICS)
Narrow results to specific industries using GICS codes.
"filters": {
"industries": ["401020"] # Insurance
}
Look up GICS codes with the documentIndustries query. See Utility APIs — Industry Codes.
Expert Insights Filters
Filter within AlphaSense Expert Insights content using three sub-fields:
- analystPerspectives: type of analyst viewpoint (e.g., "Investor-Led (Sell-Side)")
- expertPerspectives: type of expert (e.g., "Medical Professional")
- expertTranscriptType: transcript format (e.g., "Company Deep-Dive")
"filters": {
"expertInsightsFilters": {
"expertPerspectives": ["Medical Professional"],
"expertTranscriptType": ["Company Deep-Dive"]
}
}
See Utility APIs — Expert Insights Filters for available values.
Document Author Filters
Filter by the author of documents uploaded to the AlphaSense platform. This applies to user-uploaded content (internal research, reports your team has added, etc.).
"filters": {
"documentAuthors": ["Jane Smith"]
}
Date Filters
Restrict results to a time window. Use either a preset or a custom range, not both.
Presets:
| Preset Value | Range |
|---|---|
LAST_24_HOURS | Past 24 hours |
LAST_7_DAYS | Past 7 days |
LAST_30_DAYS | Past 30 days |
LAST_90_DAYS | Past 90 days |
LAST_6_MONTHS | Past 6 months |
LAST_12_MONTHS | Past 12 months |
LAST_18_MONTHS | Past 18 months |
LAST_2_YEARS | Past 2 years |
# Preset
"filters": {
"date": {"preset": "LAST_90_DAYS"}
}
# Custom range (YYYY-MM-DD)
"filters": {
"date": {"customRange": {"from": "2025-01-01", "to": "2025-03-31"}}
}
See Utility APIs — Date Presets for the full enum reference.
Country Filters
Filter by country using uppercase 2-letter ISO country codes. Use "US*" for US non-domicile
entities.
"filters": {
"countries": ["US", "GB", "CA"]
}
Look up country codes with the documentCountryCodes query. See Utility APIs — Country Codes.
Company Filters
Focus results on specific companies by ticker/identifier, or by a saved watchlist.
# By ticker or identifier
"filters": {
"companies": {"include": ["AAPL", "MSFT", "GOOGL"]}
}
# By watchlist
"filters": {
"companies": {"watchlists": ["your-watchlist-id"]}
}
You cannot combine include and watchlists in the same request — use one or the other.
Look up company identifiers with the companies query and watchlist IDs with the user query.
See Utility APIs — Company Lookup and
Utility APIs — User Watchlists.
AskInDoc
Point GenSearch at one or more specific documents so the response is grounded entirely in those docs. This is useful when you have already found a document (such as a 10-K, earnings transcript, or research report) and want to ask targeted questions about it.
When using documents, do not include the filters object. They cannot be combined in the
same request.
Single document:
variables = {
"input": {
"prompt": "What are the key risk factors mentioned in this filing?",
"documents": [
{"id": "abc123-document-id"}
]
}
}
Multiple documents:
variables = {
"input": {
"prompt": "Compare the revenue guidance across these earnings calls.",
"documents": [
{"id": "earnings-call-q1-id"},
{"id": "earnings-call-q2-id"},
{"id": "earnings-call-q3-id"}
]
}
}
Document IDs can be found using the Document Search API. See Utility APIs — Document Search for a lookup example.
Web Search
Set useWebSearch to true to include public web results alongside AlphaSense content. This
field lives at the input level, not inside filters.
variables = {
"input": {
"prompt": "What are the latest developments in quantum computing?",
"useWebSearch": True
}
}
You can combine web search with filters:
variables = {
"input": {
"prompt": "Recent moves by TSMC in Arizona",
"filters": {
"companies": {"include": ["TSM"]},
"date": {"preset": "LAST_30_DAYS"}
},
"useWebSearch": True
}
}
Combining Multiple Filters
Filters use AND logic — every filter you add narrows the results further. Combine as many as you need in a single request.
- Python
- JavaScript
# Example: sell-side analyst coverage of Apple's AI strategy in the last 6 months
variables = {
"input": {
"prompt": "What is Apple's AI and machine learning strategy?",
"filters": {
"companies": {"include": ["AAPL"]},
"sources": {"ids": ["31019"]},
"date": {"preset": "LAST_6_MONTHS"},
"expertInsightsFilters": {
"analystPerspectives": ["Investor-Led (Sell-Side)"]
}
}
}
}
// Example: sell-side analyst coverage of Apple's AI strategy in the last 6 months
const variables = {
input: {
prompt: "What is Apple's AI and machine learning strategy?",
filters: {
companies: { include: ["AAPL"] },
sources: { ids: ["31019"] },
date: { preset: "LAST_6_MONTHS" },
expertInsightsFilters: {
analystPerspectives: ["Investor-Led (Sell-Side)"],
},
},
},
};
- Python
- JavaScript
# Example: supply chain analysis for semiconductors in US, China, and Taiwan
variables = {
"input": {
"prompt": "Supply chain disruptions and their impact on margins",
"filters": {
"industries": ["45301020"],
"date": {
"customRange": {
"from": "2025-06-01",
"to": "2025-12-31"
}
},
"countries": ["US", "CN", "TW"]
}
}
}
// Example: supply chain analysis for semiconductors in US, China, and Taiwan
const variables = {
input: {
prompt: "Supply chain disruptions and their impact on margins",
filters: {
industries: ["45301020"],
date: {
customRange: {
from: "2025-06-01",
to: "2025-12-31",
},
},
countries: ["US", "CN", "TW"],
},
},
};
Fast Mode
Fast mode is optimized for speed. Use it when you need a quick, concise answer and latency matters more than exhaustive depth. At 10 credits per query and approximately 30 seconds of response time, it is ideal for real-time lookups, simple factual questions, and lightweight integrations.
Mutation
- Python
- JavaScript
- cURL
import os
import requests
url = "https://api.alpha-sense.com/gql"
headers = {
    "x-api-key": os.environ["ALPHASENSE_API_KEY"],
    "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

mutation = """
mutation GenSearchFast($input: GenSearchInput!) {
  genSearch {
    fast(input: $input) {
      id
    }
  }
}
"""

variables = {
    "input": {
        "prompt": "What was Apple's revenue in Q4 2025?"
    }
}

response = requests.post(
    url,
    headers=headers,
    json={"query": mutation, "variables": variables},
)

conversation_id = response.json()["data"]["genSearch"]["fast"]["id"]
print(f"Conversation ID: {conversation_id}")
const url = "https://api.alpha-sense.com/gql";
const headers = {
"x-api-key": process.env.ALPHASENSE_API_KEY,
clientid: process.env.ALPHASENSE_CLIENT_ID,
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
};
const mutation = `
mutation GenSearchFast($input: GenSearchInput!) {
genSearch {
fast(input: $input) {
id
}
}
}
`;
const variables = {
input: {
prompt: "What was Apple's revenue in Q4 2025?",
},
};
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify({ query: mutation, variables }),
});
const result = await response.json();
const conversationId = result.data.genSearch.fast.id;
console.log(`Conversation ID: ${conversationId}`);
curl --request POST 'https://api.alpha-sense.com/gql' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header "clientid: $ALPHASENSE_CLIENT_ID" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data '{
"query": "mutation GenSearchFast($input: GenSearchInput!) { genSearch { fast(input: $input) { id } } }",
"variables": {
"input": {
"prompt": "What was Apple'\''s revenue in Q4 2025?"
}
}
}'
You can add filters, documents, or useWebSearch to the input alongside prompt. See Search Filters.
Auto Mode
Auto mode is the recommended default for most use cases. It automatically selects the optimal
depth of analysis based on your query, balancing speed and thoroughness. At 10 credits per query
and approximately 30-90 seconds of response time, it is ideal when you want high-quality results
without having to choose between fast and thinkLonger manually.
Mutation
- Python
- JavaScript
- cURL
import os
import requests
url = "https://api.alpha-sense.com/gql"
headers = {
    "x-api-key": os.environ["ALPHASENSE_API_KEY"],
    "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

mutation = """
mutation GenSearchAuto($input: GenSearchInput!) {
  genSearch {
    auto(input: $input) {
      id
    }
  }
}
"""

variables = {
    "input": {
        "prompt": "What are the key trends driving semiconductor demand in 2025?"
    }
}

response = requests.post(
    url,
    headers=headers,
    json={"query": mutation, "variables": variables},
)

conversation_id = response.json()["data"]["genSearch"]["auto"]["id"]
print(f"Conversation ID: {conversation_id}")
const url = "https://api.alpha-sense.com/gql";
const headers = {
"x-api-key": process.env.ALPHASENSE_API_KEY,
clientid: process.env.ALPHASENSE_CLIENT_ID,
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
};
const mutation = `
mutation GenSearchAuto($input: GenSearchInput!) {
genSearch {
auto(input: $input) {
id
}
}
}
`;
const variables = {
input: {
prompt: "What are the key trends driving semiconductor demand in 2025?",
},
};
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify({ query: mutation, variables }),
});
const result = await response.json();
const conversationId = result.data.genSearch.auto.id;
console.log(`Conversation ID: ${conversationId}`);
curl --request POST 'https://api.alpha-sense.com/gql' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header "clientid: $ALPHASENSE_CLIENT_ID" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data '{
"query": "mutation GenSearchAuto($input: GenSearchInput!) { genSearch { auto(input: $input) { id } } }",
"variables": {
"input": {
"prompt": "What are the key trends driving semiconductor demand in 2025?"
}
}
}'
You can add filters, documents, or useWebSearch to the input alongside prompt. See Search Filters.
Think Longer Mode
Think Longer mode provides a deeper level of analysis. It spends more time reasoning through the question, cross-referencing multiple sources, and producing a more nuanced response. At 25 credits per query and approximately 60-90 seconds of response time, it is well-suited for multi-factor comparisons, strategic questions, and situations where accuracy and completeness outweigh speed.
Mutation
- Python
- JavaScript
- cURL
import os
import requests
url = "https://api.alpha-sense.com/gql"
headers = {
    "x-api-key": os.environ["ALPHASENSE_API_KEY"],
    "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

mutation = """
mutation GenSearchThinkLonger($input: GenSearchInput!) {
  genSearch {
    thinkLonger(input: $input) {
      id
    }
  }
}
"""

variables = {
    "input": {
        "prompt": "Compare the competitive positioning of NVIDIA and AMD in the data center GPU market over the past two quarters."
    }
}

response = requests.post(
    url,
    headers=headers,
    json={"query": mutation, "variables": variables},
)

conversation_id = response.json()["data"]["genSearch"]["thinkLonger"]["id"]
print(f"Conversation ID: {conversation_id}")
const url = "https://api.alpha-sense.com/gql";
const headers = {
"x-api-key": process.env.ALPHASENSE_API_KEY,
clientid: process.env.ALPHASENSE_CLIENT_ID,
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
};
const mutation = `
mutation GenSearchThinkLonger($input: GenSearchInput!) {
genSearch {
thinkLonger(input: $input) {
id
}
}
}
`;
const variables = {
input: {
prompt:
"Compare the competitive positioning of NVIDIA and AMD in the data center GPU market over the past two quarters.",
},
};
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify({ query: mutation, variables }),
});
const result = await response.json();
const conversationId = result.data.genSearch.thinkLonger.id;
console.log(`Conversation ID: ${conversationId}`);
curl --request POST 'https://api.alpha-sense.com/gql' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header "clientid: $ALPHASENSE_CLIENT_ID" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data '{
"query": "mutation GenSearchThinkLonger($input: GenSearchInput!) { genSearch { thinkLonger(input: $input) { id } } }",
"variables": {
"input": {
"prompt": "Compare the competitive positioning of NVIDIA and AMD in the data center GPU market over the past two quarters."
}
}
}'
You can add filters, documents, or useWebSearch to the input alongside prompt. See Search Filters.
Deep Research Mode
Deep Research mode produces comprehensive, report-grade output. It performs extensive source gathering, synthesizes information across many documents, and returns a structured, detailed research report. At 100 credits per query and approximately 12-15 minutes of response time, it is designed for investment memos, thorough competitive analyses, and any scenario where you need the most complete answer possible.
Mutation
- Python
- JavaScript
- cURL
import os
import requests
url = "https://api.alpha-sense.com/gql"
headers = {
    "x-api-key": os.environ["ALPHASENSE_API_KEY"],
    "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

mutation = """
mutation GenSearchDeepResearch($input: GenSearchInput!) {
  genSearch {
    deepResearch(input: $input) {
      id
    }
  }
}
"""

variables = {
    "input": {
        "prompt": "Provide a comprehensive analysis of the electric vehicle market: key players, supply chain risks, regulatory tailwinds, and projected growth through 2027."
    }
}

response = requests.post(
    url,
    headers=headers,
    json={"query": mutation, "variables": variables},
)

conversation_id = response.json()["data"]["genSearch"]["deepResearch"]["id"]
print(f"Conversation ID: {conversation_id}")
const url = "https://api.alpha-sense.com/gql";
const headers = {
"x-api-key": process.env.ALPHASENSE_API_KEY,
clientid: process.env.ALPHASENSE_CLIENT_ID,
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
};
const mutation = `
mutation GenSearchDeepResearch($input: GenSearchInput!) {
genSearch {
deepResearch(input: $input) {
id
}
}
}
`;
const variables = {
input: {
prompt:
"Provide a comprehensive analysis of the electric vehicle market: key players, supply chain risks, regulatory tailwinds, and projected growth through 2027.",
},
};
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify({ query: mutation, variables }),
});
const result = await response.json();
const conversationId = result.data.genSearch.deepResearch.id;
console.log(`Conversation ID: ${conversationId}`);
curl --request POST 'https://api.alpha-sense.com/gql' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header "clientid: $ALPHASENSE_CLIENT_ID" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data '{
"query": "mutation GenSearchDeepResearch($input: GenSearchInput!) { genSearch { deepResearch(input: $input) { id } } }",
"variables": {
"input": {
"prompt": "Provide a comprehensive analysis of the electric vehicle market: key players, supply chain risks, regulatory tailwinds, and projected growth through 2027."
}
}
}'
You can add filters, documents, or useWebSearch to the input alongside prompt. See Search Filters.
Polling for Results
After initiating any GenSearch mode, you receive a conversation ID. Use this ID to poll for results. The polling query is the same regardless of which mode you used.
Polling Query
query Query($conversationId: String!) {
genSearch {
conversation(id: $conversationId) {
id
markdown
progress
error {
code
}
}
}
}
Full Polling Implementation
- Python
- JavaScript
- cURL
import os
import time
import requests
url = "https://api.alpha-sense.com/gql"
headers = {
    "x-api-key": os.environ["ALPHASENSE_API_KEY"],
    "clientid": os.environ["ALPHASENSE_CLIENT_ID"],
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

poll_query = """
query Query($conversationId: String!) {
  genSearch {
    conversation(id: $conversationId) {
      id
      markdown
      progress
      error {
        code
      }
    }
  }
}
"""

def poll_for_results(conversation_id, interval=3, timeout=900):
    """Poll until the GenSearch conversation completes or times out.

    The 900-second timeout leaves headroom for deepResearch queries,
    which can take 12-15 minutes.
    """
    start_time = time.time()
    while time.time() - start_time < timeout:
        response = requests.post(
            url,
            headers=headers,
            json={
                "query": poll_query,
                "variables": {"conversationId": conversation_id},
            },
        )
        data = response.json()["data"]["genSearch"]["conversation"]

        # Check for errors
        if data.get("error"):
            raise Exception(f"GenSearch error: {data['error']['code']}")

        progress = data.get("progress", 0.0)
        print(f"Progress: {progress:.0%}")

        # progress reaches 1.0 when the response is complete
        if progress >= 1.0:
            return data["markdown"]

        time.sleep(interval)

    raise TimeoutError("Polling timed out waiting for GenSearch results.")
# Usage: pass the conversation_id from any mode's mutation response
result_markdown = poll_for_results(conversation_id)
print(result_markdown)
const url = "https://api.alpha-sense.com/gql";
const headers = {
"x-api-key": process.env.ALPHASENSE_API_KEY,
clientid: process.env.ALPHASENSE_CLIENT_ID,
Authorization: `Bearer ${token}`,
"Content-Type": "application/json",
};
const pollQuery = `
query Query($conversationId: String!) {
genSearch {
conversation(id: $conversationId) {
id
markdown
progress
error {
code
}
}
}
}
`;
async function pollForResults(conversationId, interval = 3000, timeout = 900000) {
const startTime = Date.now();
while (Date.now() - startTime < timeout) {
const response = await fetch(url, {
method: "POST",
headers,
body: JSON.stringify({
query: pollQuery,
variables: { conversationId },
}),
});
const json = await response.json();
const data = json.data.genSearch.conversation;
// Check for errors
if (data.error) {
throw new Error(`GenSearch error: ${data.error.code}`);
}
const progress = data.progress ?? 0.0;
console.log(`Progress: ${(progress * 100).toFixed(0)}%`);
// progress reaches 1.0 when the response is complete
if (progress >= 1.0) {
return data.markdown;
}
await new Promise((resolve) => setTimeout(resolve, interval));
}
throw new Error("Polling timed out waiting for GenSearch results.");
}
// Usage: pass the conversationId from any mode's mutation response
const resultMarkdown = await pollForResults(conversationId);
console.log(resultMarkdown);
# Poll for results by replacing $CONVERSATION_ID with the id from the mutation response.
# Repeat this call until "progress" reaches 1.0.
curl --request POST 'https://api.alpha-sense.com/gql' \
--header "x-api-key: $ALPHASENSE_API_KEY" \
--header "clientid: $ALPHASENSE_CLIENT_ID" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data '{
"query": "query Query($conversationId: String!) { genSearch { conversation(id: $conversationId) { id markdown progress error { code } } } }",
"variables": {
"conversationId": "'"$CONVERSATION_ID"'"
}
}'
Progress Tracking
The progress field returned by the polling query is a floating-point number that ranges from 0.0 to 1.0:
| Progress Value | Meaning |
|---|---|
0.0 | The request has been received and queued |
0.0 < progress < 1.0 | The response is being generated; partial results may be available in markdown |
1.0 | The response is complete; the final result is in markdown |
Recommended polling intervals by mode:
- fast -- poll every 2-3 seconds (expected completion in ~30 seconds)
- auto -- poll every 3-5 seconds (expected completion in ~30-90 seconds)
- thinkLonger -- poll every 5 seconds (expected completion in ~60-90 seconds)
- deepResearch -- poll every 10 seconds (expected completion in ~12-15 minutes)
While the response is still being generated (progress < 1.0), the markdown field may contain partial content. You can display this to the user as a progressive loading experience or wait for progress to reach 1.0 before rendering the full response.
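If you render partial markdown progressively, you usually want to emit only the text appended since the previous poll. A sketch of that diff, assuming the partial markdown grows by appending (earlier content could in principle be revised, in which case the fallback re-emits the full text):

```python
def new_content(previous: str, current: str) -> str:
    """Return the portion of `current` appended since `previous`.

    Falls back to the full text if `current` is not a simple
    extension of `previous` (e.g., earlier content was revised).
    """
    if current.startswith(previous):
        return current[len(previous):]
    return current
```

Inside a polling loop you would track the last markdown seen, print `new_content(last_markdown, data["markdown"] or "")`, and update `last_markdown` before sleeping.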
Response Format
All GenSearch modes return their results in the markdown field as standard Markdown text with inline citations. Citations follow the pattern:
[[N • Source Name]]
where N is a numeric reference and Source Name identifies the document the information was drawn from, separated by a bullet (•). For example:
Apple reported Q4 2025 revenue of $94.9 billion, a 6% year-over-year increase [[1 • Earnings]]. The growth was primarily driven by strong performance in the Services segment [[2 • Broker Research]].
Each citation links back to a specific source document in AlphaSense's content library. For details on how to programmatically parse and render these citations, see the Response Parsing guide.
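As a preview of what the Response Parsing guide covers in full, citations in this pattern can be pulled out with a regular expression. A sketch that assumes source names never contain a closing ]]:

```python
import re

# Matches [[N • Source Name]] with optional whitespace around the separator.
CITATION_RE = re.compile(r"\[\[(\d+)\s*•\s*([^\]]+?)\s*\]\]")

def extract_citations(markdown: str) -> list[tuple[int, str]]:
    """Return (reference number, source name) pairs from GenSearch markdown."""
    return [(int(num), name) for num, name in CITATION_RE.findall(markdown)]
```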
When to Use Which Mode
Choosing the right mode depends on the nature of your question, your latency requirements, and how many credits you want to spend.
Use fast when:
- You need a quick factual answer (e.g., "What was Tesla's Q3 revenue?")
- The query is part of a real-time user-facing interface where response time matters
- You are performing many lookups in batch and want to conserve credits
- The question has a straightforward, well-scoped answer
Use auto when:
- You want the best balance of speed and depth without choosing a mode manually
- You are building a general-purpose integration and want a single default mode
- The query complexity varies and you want the system to adapt automatically
- You want high-quality results at the same credit cost as
fast
Use thinkLonger when:
- The question requires comparing multiple data points or companies
- You need a more nuanced answer that weighs different perspectives
- Accuracy and depth are more important than sub-minute response time
- Examples: "How do margins at Starbucks compare to Dunkin' over the last four quarters?" or "What are analysts saying about the impact of rising interest rates on REITs?"
Use deepResearch when:
- You need a comprehensive, report-style answer covering an entire topic
- The question spans multiple dimensions (market sizing, competitive landscape, regulatory environment, etc.)
- You are generating content for investment memos, board presentations, or strategic planning
- You are willing to wait several minutes and spend more credits for the most thorough response
- Examples: "Provide a full competitive analysis of the cloud infrastructure market" or "What are the key risks and opportunities in the global semiconductor supply chain?"
Quick Decision Guide
Do you need the absolute fastest response possible?
YES --> Use fast
NO --> Do you need a comprehensive, report-grade output?
YES --> Use deepResearch
NO --> Do you specifically need extended reasoning for a complex comparison?
YES --> Use thinkLonger
NO --> Use auto (recommended default)
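If your application routes queries programmatically, the decision guide above can be encoded as a tiny helper. The boolean flags are assumptions about your own application logic, not API parameters:

```python
def choose_mode(need_fastest: bool = False,
                need_report: bool = False,
                need_deep_comparison: bool = False) -> str:
    """Map the quick decision guide onto a GenSearch mutation name."""
    if need_fastest:
        return "fast"
    if need_report:
        return "deepResearch"
    if need_deep_comparison:
        return "thinkLonger"
    return "auto"  # recommended default
```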
- Streaming: For real-time progressive rendering instead of polling, see the Streaming guide.
- Response Parsing: To learn how to parse citations and structure the Markdown response for display, see the Response Parsing guide.
- Utility APIs: Look up filter values (source IDs, GICS codes, company tickers, etc.) at Utility APIs.