
Downloading Files from URL Addresses and Saving Them to Corresponding Directories in Python

Date: 2022-07-05 13:50:15

Introduction

In programming you often encounter datasets, such as image collections, where the data is stored as URLs in txt files. To make later analysis easier, the files need to be downloaded and saved into folders by category. This article uses the image classification dataset provided by Alexander Kim on GitHub as an example: we download the image samples it lists and save them by category.

Environment: Python 3.6.5, Anaconda, VSCode

1. Download the Dataset Files

Create a project folder, download the raw_data folder from the GitHub project mentioned above, and save it into the project directory.


2. Get the Sample File Locations

Write get_doc_path.py, which, given a root directory, collects all dataset files in that directory and its subdirectories.

import os


def get_file(root_path, all_files=None):
    '''
    Recursive function: traverse the given directory and all of its
    subdirectories and collect the path of every file found.
    '''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
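As a side note, the standard library's os.walk can express the same traversal without writing the recursion by hand. This is only an equivalent sketch, not part of the original script; the name get_file_walk is a hypothetical one chosen here:

import os

def get_file_walk(root_path):
    # Equivalent traversal using os.walk, which visits every
    # subdirectory for us; builds the same {filename: path} mapping
    # as get_file above.
    all_files = {}
    for dirpath, _dirnames, filenames in os.walk(root_path):
        for name in filenames:
            all_files[name] = os.path.join(dirpath, name)
    return all_files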

3. Download the Files

3.1 Read the URL Lists

for filename, path in paths.items():
    print('reading file: {}'.format(filename))
    with open(path, 'r') as f:
        lines = f.readlines()
        url_list = []
        for line in lines:
            url_list.append(line.strip('\n'))
        print(url_list)
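For reference, the same list can be built more compactly with splitlines(). A minimal equivalent sketch, assuming the same paths dict from get_doc_path and assuming that blank lines in the txt files should be skipped:

for filename, path in paths.items():
    with open(path, 'r') as f:
        # splitlines() drops the trailing newline from each line;
        # the condition filters out any blank lines in the txt file
        url_list = [line for line in f.read().splitlines() if line]
    print('reading file: {} ({} urls)'.format(filename, len(url_list)))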

3.2 Create the Folders

folder_path = './picture_get_by_url/pic_download/{}'.format(filename.split('.')[0])
if not os.path.exists(folder_path):
    print('Selected folder does not exist, trying to create it.')
    os.makedirs(folder_path)
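On Python 3.2 and later, the existence check and the creation can be collapsed into a single call; a minimal alternative sketch (it omits the progress message printed above):

import os

# exist_ok=True makes the call a no-op when the folder already exists,
# so no separate os.path.exists() check is needed
os.makedirs(folder_path, exist_ok=True)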

3.3 Download the Images

import os
import urllib.request


def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print('Selected folder does not exist, trying to create it.')
        os.makedirs(folder_path)
    for url in lists:
        print('Try downloading file: {}'.format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print('File already exists, skipping.')
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print('Error occurred when downloading file, error message:')
                print(e)
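One limitation of urlretrieve is that it accepts no timeout, so a dead server can stall the loop indefinitely. A hedged sketch of an alternative download step using urllib.request.urlopen with an explicit timeout; the 10-second value and the browser-style User-Agent header are assumptions made here, not part of the original article:

import urllib.request

def download_with_timeout(url, filepath, timeout=10):
    # Some hosts reject the default Python User-Agent, so send a
    # browser-like one; socket operations abort after `timeout`
    # seconds instead of hanging forever.
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    with urllib.request.urlopen(req, timeout=timeout) as resp, \
            open(filepath, 'wb') as out:
        out.write(resp.read())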

4. Complete Source Code

4.1 get_doc_path.py

import os


def get_file(root_path, all_files=None):
    '''
    Recursive function: traverse the given directory and all of its
    subdirectories and collect the path of every file found.
    '''
    if all_files is None:  # avoid the mutable-default-argument pitfall
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))

4.2 get_pic.py

import get_doc_path
import os
import urllib.request


def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print('Selected folder does not exist, trying to create it.')
        os.makedirs(folder_path)
    for url in lists:
        print('Try downloading file: {}'.format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print('File already exists, skipping.')
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print('Error occurred when downloading file, error message:')
                print(e)


if __name__ == '__main__':
    root_path = './picture_get_by_url/raw_data'
    paths = get_doc_path.get_file(root_path)
    print(paths)
    for filename, path in paths.items():
        print('reading file: {}'.format(filename))
        with open(path, 'r') as f:
            lines = f.readlines()
            url_list = []
            for line in lines:
                url_list.append(line.strip('\n'))
        foldername = './picture_get_by_url/pic_download/{}'.format(filename.split('.')[0])
        get_pic_by_url(foldername, url_list)

4.3 Run Results

Run get_pic.py. If the program stops unexpectedly and is run again, it automatically skips files already present in the folder and continues downloading the remaining ones.

{'urls_drawings.txt': './picture_get_by_url/raw_data/drawings/urls_drawings.txt', 'urls_hentai.txt': './picture_get_by_url/raw_data/hentai/urls_hentai.txt', 'urls_neutral.txt': './picture_get_by_url/raw_data/neutral/urls_neutral.txt', 'urls_porn.txt': './picture_get_by_url/raw_data/porn/urls_porn.txt', 'urls_sexy.txt': './picture_get_by_url/raw_data/sexy/urls_sexy.txt'}
reading file: urls_drawings.txt
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://ak1.polyvoreimg.com/cgi/img-thing/size/l/tid/xxxxxx.jpg
Error occurred when downloading file, error message:
HTTP Error 502: No data received from server or forwarder
Try downloading file: http://akicocotte.weblike.jp/gaugau/xxxxxx.jpg
Try downloading file: http://animewriter.files.wordpress.com/2009/01/nagisa-xxxxxx-xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
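One caveat with the skip-if-exists resume logic: a file left behind by an interrupted download is treated as complete and never re-fetched. A hedged sketch of one common workaround, downloading to a temporary name and renaming on success; the download_atomically name and the '.part' suffix are arbitrary choices made here, not from the original script:

import os
import urllib.request

def download_atomically(url, filepath):
    # Write to a temporary '.part' file first; only rename to the
    # final name after the download finishes, so an interrupted run
    # never leaves a truncated file that the exists() check would skip.
    tmp_path = filepath + '.part'
    urllib.request.urlretrieve(url, filename=tmp_path)
    os.replace(tmp_path, filepath)  # atomic on the same filesystem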

Postscript: because of the nature of the sample dataset's content, the addresses above show xxxxxx in place of the real paths. The example project has since become unavailable, but the method is still applicable.

Update 2020-09-23: dataset address: https://github.com/ZQ-Qi/nsfw_data_scrapper. If you simply want to study and practice the code in this article, you can download this dataset and try it out.

This concludes this article on downloading files from URL addresses in Python and saving them to the corresponding directories.

Tags: Python, Programming