- Large Language Models
- Image Generation
- Unified API
- GPT-Image-1
- DALL·E
- Stability.ai
- Midjourney
- Midjourney-Relax
- 302.AI
- SDXL (Image Generation)
- SDXL-Lora (Image Generation with LoRA)
- SDXL-Lightning (Fast Image Generation)
- SDXL-Lightning-V2 (Fast Image Generation V2)
- SD3 (Image Generation, SD3)
- Aura-Flow (Image Generation)
- Kolors (Image Generation, Kling)
- Kolors (Reference Image Generation, Kling)
- QRCode (Artistic QR Code Generation)
- Lora (Image Generation with LoRA)
- Lora (Fetch Task Result)
- SD-3.5-Large (Image Generation)
- SD-3.5-Large-Turbo (Image Generation)
- SD-3.5-Medium (Image Generation)
- Lumina-Image-V2 (Image Generation)
- Playground-v25 (Image Generation)
- Omnigen-V1 (Image Generation)
- Glif
- Flux
- Ideogram
- Recraft
- Luma
- Doubao Jimeng
- Minimax Hailuo
- Zhipu
- Baidu
- Hidream
- Bagel
- Image Processing
- 302.AI-ComfyUI
- 302.AI
- Upscale (Image Upscaling)
- Upscale-V2 (Image Upscaling V2)
- Upscale-V3 (Image Upscaling V3)
- Upscale-V4 (Image Upscaling V4)
- Super-Upscale (Super Image Upscaling)
- Super-Upscale-V2 (Super Image Upscaling V2)
- Face-upscale (Portrait Upscaling)
- Colorize (B&W Photo Colorization)
- Colorize (B&W Photo Colorization V2)
- Removebg (Background Removal)
- Removebg-V2 (Background Removal V2)
- Removebg-V3 (Background Removal V3)
- Inpaint (Image Editing)
- Erase (Object Removal)
- Face-to-many (Portrait Stylization)
- Llava (Image Recognition)
- Relight (Relighting)
- Relight-background (Relighting with Background Compositing)
- Relight-V2 (Relighting V2)
- Face-swap-V2 (AI Face Swap V2)
- Fetch (Fetch Task Result)
- HtmltoPng (HTML to PNG)
- SvgToPng (SVG to PNG)
- image-translate (Image Translation)
- image-translate-query (Image Translation Result)
- image-translate-redo (Image Translation Revision)
- Flux-selfie (Selfie Stylization)
- Trellis (Image to 3D Model)
- Pose-Transfer (Pose Transfer)
- Pose-Transfer (Pose Transfer Result)
- Virtual-Tryon (Virtual Try-On)
- Virtual-Tryon (Virtual Try-On Result)
- Denoise (AI Denoising)
- Deblur (AI Deblurring)
- SAM (AI Mask Generation)
- Vectorizer
- Stability.ai
- Fast Upscale (Fast Image Upscaling)
- Creative Upscale (Creative Image Upscaling)
- Conservative Upscale (Conservative Image Upscaling)
- Fetch Creative Upscale (Super Image Upscaling)
- Erase (Object Removal)
- Inpaint (Image Editing)
- Outpaint (Image Expansion)
- Search-and-replace (Content Replacement)
- Search-and-recolor (Content Recoloring)
- Remove-background (Background Removal)
- Sketch (Sketch to Image)
- Structure (Image-to-Image)
- Style (Style Consistency)
- Replace-Background (Background Replacement)
- Stable-Fast-3D (Image to 3D Model)
- Stable-Point-3D (Image to 3D Model, New Version)
- Glif
- Clipdrop
- Recraft
- BRIA
- Remove Background (Background Removal)
- Blur Background (Background Blur)
- Generate Background (Background Generation)
- Erase Foreground (Foreground Erasure)
- Eraser (Object Erasure)
- Expand Image (Image Expansion)
- Increase Resolution (Image Upscaling)
- Crop (Image Cropping)
- Cutout (Product Image Cutout)
- Packshot (Product Packshot)
- Shadow (Product Shadow)
- Scene (Product Scene Generation)
- Caption (Image Captioning)
- Register (Image Upload)
- Mask (Image Segmentation)
- Presenter Info (Face Analysis)
- Modify Presenter (Face Modification)
- Delayer Image (Image to PSD)
- Flux
- Hyper3D
- Tripo3D
- FASHN
- Ideogram
- Doubao Jimeng
- Kling
- StepFun
- Bagel
- Video Generation
- Unified API
- 302.AI
- Stable Diffusion
- Luma AI
- Runway
- Kling
- 302 Format
- Txt2Video (Text-to-Video 1.0, Fast, 5s)
- Txt2Video_HQ (Text-to-Video 1.5, HD, 5s)
- Txt2Video_HQ (Text-to-Video 1.5, HD, 10s)
- Image2Video (Image-to-Video 1.0, Fast, 5s)
- Image2Video (Image-to-Video 1.0, Fast, 10s)
- Image2Video (Image-to-Video 1.5, Fast, 5s)
- Image2Video (Image-to-Video 1.5, Fast, 10s)
- Image2Video_HQ (Image-to-Video 1.5, HD, 5s)
- Image2Video_HQ (Image-to-Video 1.5, HD, 10s)
- Txt2Video (Text-to-Video 1.6, Standard, 5s)
- Txt2Video (Text-to-Video 1.6, Standard, 10s)
- Txt2Video (Text-to-Video 1.6, HD, 5s)
- Image2Video (Image-to-Video 1.6, Standard, 5s)
- Txt2Video (Text-to-Video 1.6, HD, 10s)
- Image2Video (Image-to-Video 1.6, Standard, 10s)
- Image2Video (Image-to-Video 1.6, HD, 5s)
- Image2Video (Image-to-Video 1.6, HD, 10s)
- Txt2Video (Text-to-Video 2.0, HD, 5s)
- Image2Video (Image-to-Video 2.0, HD, 5s)
- Image2Video (Image-to-Video 2.0, HD, 10s)
- Image2Video (Image-to-Video 2.1, 5s)
- Image2Video (Image-to-Video 2.1, 10s)
- Image2Video (Image-to-Video 2.1, HD, 5s)
- Image2Video (Image-to-Video 2.1, HD, 10s)
- Txt2Video (Text-to-Video 2.1, Master, 5s)
- Txt2Video (Text-to-Video 2.1, Master, 10s)
- Image2Video (Image-to-Video 2.1, Master, 5s)
- Image2Video (Image-to-Video 2.1, Master, 10s)
- Image2Video (Multi-Image Reference)
- Extend_Video (Video Extension)
- Fetch (Fetch Task Result)
- Official Format
- 302 Format
- CogVideoX (Zhipu)
- Minimax Hailuo
- Pika
- PixVerse
- Genmo
- Hedra
- Haiper
- Sync.
- Lightricks
- Hunyuan
- Vidu
- Tongyi Wanxiang
- Jimeng
- SiliconFlow
- Kunlun Tech
- Higgsfield
- Audio & Video Processing
- Information Processing
- Unified Search API
- 302.AI
- Admin Console
- Information Search
- Xiaohongshu_Search (Search Xiaohongshu Notes)
- Xiaohongshu_Note (Fetch Xiaohongshu Note)
- Tiktok_Search (Search TikTok Videos)
- Douyin_Search (Search Douyin Videos)
- Twitter_Search (Search X Content)
- Twitter_Post (Fetch X User Posts)
- Twitter_User (Fetch X User Info)
- Weibo_Post (Fetch Weibo User Posts)
- Search_Video (Search YouTube Videos)
- Youtube_Info (Fetch YouTube Video Info)
- Youtube_Subtitles (Fetch YouTube Subtitles)
- Bilibili_Info (Fetch Bilibili Video Info)
- MP_Article_List (Fetch WeChat Official Account Article List)
- MP_Article (Fetch WeChat Official Account Article)
- Zhihu_AI_Search (Zhihu AI Search)
- Zhihu_AI_Search (Fetch Zhihu AI Search Result)
- Zhihu_Hot_List (Zhihu Hot List)
- Video_Data (Fetch Video Data)
- File Processing
- Code Execution
- Remote Browser
- Tavily
- SearchAPI
- Search1API
- Exa
- Bocha AI
- Doc2x
- Glif
- Jina
- DeepL
- RSSHub
- 流光卡片
- Youdao
- Mistral
- Firecrawl
- RAG-Related
- Tool APIs
- Help Center
Scrape Status (Fetch Result)
Production environment: https://api.302.ai
GET https://api.302.ai/firecrawl/v1/batch/scrape/{id}
Official documentation: https://docs.firecrawl.dev/api-reference/endpoint/batch-scrape-get
Request Parameters
Path Parameters
id: string, required. The ID of the batch scrape job.
Header Parameters
Authorization: string. API Key. Example value: Bearer {{YOUR_API_KEY}}
Example Code
Request Example (Shell)
curl --location --request GET 'https://api.302.ai/firecrawl/v1/batch/scrape/{id}' \
--header 'Authorization: Bearer {{YOUR_API_KEY}}'
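The same request in Python, as a minimal sketch assuming the endpoint and header documented above; the requests library is third-party, and the API key and job ID values are placeholders you must supply.
Python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: your 302.AI API key
JOB_ID = "YOUR_JOB_ID"    # placeholder: the batch scrape job ID

# GET /firecrawl/v1/batch/scrape/{id} with the Bearer token header.
resp = requests.get(
    f"https://api.302.ai/firecrawl/v1/batch/scrape/{JOB_ID}",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())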
Response
🟢 200 Success
application/json
Body
- pages: array [object {4}], required
  - index: integer. Page number.
  - markdown: string. Markdown content of the page.
  - images: array [object {6}], required.
  - dimensions: object, required.
- model: string. Model name.
- usage_info: object, required
  - pages_processed: integer. Number of pages in the file.
  - doc_size_bytes: integer. File size in bytes.
Example
{ "pages": [ { "index": 0, "markdown": "# LEVERAGING UNLABELED DATA TO PREDICT OUT-OF-DISTRIBUTION PERFORMANCE \n\nSaurabh Garg*<br>Carnegie Mellon University<br>sgarg2@andrew.cmu.edu<br>Sivaraman Balakrishnan<br>Carnegie Mellon University<br>sbalakri@andrew.cmu.edu<br>Zachary C. Lipton<br>Carnegie Mellon University<br>zlipton@andrew.cmu.edu\n\n## Behnam Neyshabur\n\nGoogle Research, Blueshift team\nneyshabur@google.com\n\nHanie Sedghi<br>Google Research, Brain team<br>hsedghi@google.com\n\n\n#### Abstract\n\nReal-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shifts (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (WILDS, ImageNet, BREEDS, CIFAR, and MNIST). In our experiments, ATC estimates target performance $2-4 \\times$ more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor and thus, the efficacy of any method rests upon (perhaps unstated) assumptions on the nature of the shift. Finally, analyzing our method on some toy distributions, we provide insights concerning when it works ${ }^{1}$.\n\n\n## 1 INTRODUCTION\n\nMachine learning models deployed in the real world typically encounter examples from previously unseen distributions. While the IID assumption enables us to evaluate models using held-out data from the source distribution (from which training data is sampled), this estimate is no longer valid in presence of a distribution shift. Moreover, under such shifts, model accuracy tends to degrade (Szegedy et al., 2014; Recht et al., 2019; Koh et al., 2021). Commonly, the only data available to the practitioner are a labeled training set (source) and unlabeled deployment-time data which makes the problem more difficult. In this setting, detecting shifts in the distribution of covariates is known to be possible (but difficult) in theory (Ramdas et al., 2015), and in practice (Rabanser et al., 2018). However, producing an optimal predictor using only labeled source and unlabeled target data is well-known to be impossible absent further assumptions (Ben-David et al., 2010; Lipton et al., 2018).\n\nTwo vital questions that remain are: (i) the precise conditions under which we can estimate a classifier's target-domain accuracy; and (ii) which methods are most practically useful. To begin, the straightforward way to assess the performance of a model under distribution shift would be to collect labeled (target domain) examples and then to evaluate the model on that data. However, collecting fresh labeled data from the target distribution is prohibitively expensive and time-consuming, especially if the target distribution is non-stationary. Hence, instead of using labeled data, we aim to use unlabeled data from the target distribution, that is comparatively abundant, to predict model performance. 
Note that in this work, our focus is not to improve performance on the target but, rather, to estimate the accuracy on the target for a given classifier.\n\n[^0]\n[^0]: * Work done in part while Saurabh Garg was interning at Google\n ${ }^{1}$ Code is available at https://github.com/saurabhgarg1996/ATC_code.", "images": [], "dimensions": { "dpi": 200, "height": 2200, "width": 1700 } }, { "index": 1, "markdown": "\n\nFigure 1: Illustration of our proposed method ATC. Left: using source domain validation data, we identify a threshold on a score (e.g. negative entropy) computed on model confidence such that fraction of examples above the threshold matches the validation set accuracy. ATC estimates accuracy on unlabeled target data as the fraction of examples with the score above the threshold. Interestingly, this threshold yields accurate estimates on a wide set of target distributions resulting from natural and synthetic shifts. Right: Efficacy of ATC over previously proposed approaches on our testbed with a post-hoc calibrated model. To obtain errors on the same scale, we rescale all errors with Average Confidence (AC) error. Lower estimation error is better. See Table 1 for exact numbers and comparison on various types of distribution shift. See Sec. 5 for details
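Given a response shaped like the example above, a minimal Python parsing sketch; the field names follow the Body schema documented above, and the response.json filename is a placeholder.
Python
import json

def combine_markdown(body: dict) -> str:
    # Concatenate per-page Markdown in page order, using the
    # pages[].index and pages[].markdown fields documented above.
    pages = sorted(body.get("pages", []), key=lambda p: p["index"])
    return "\n\n".join(p["markdown"] for p in pages)

with open("response.json", encoding="utf-8") as f:  # placeholder filename
    body = json.load(f)

print(combine_markdown(body)[:200])
usage = body.get("usage_info", {})
print(usage.get("pages_processed"), "pages,", usage.get("doc_size_bytes"), "bytes")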