Swift Video Recording: Setting the Capture Window Size to Record Square Video

In the previous two articles I showed how to record video with the AVCaptureSession class from the AVFoundation framework, and how to use AVMutableComposition to stitch the video and audio tracks of several clips into a single file.
Both of those samples recorded full screen, so the resulting video was portrait (on an iPhone 6 the recorded resolution is 1080×1920).

Most of the popular video apps, however, shoot square clips (or landscape rectangles). This article shows how to change the capture size, using a square video as the example.

1. Recording a square video involves three parts:
(1) Make the preview window (AVCaptureVideoPreviewLayer) a square (both sides equal to the screen width) and center it vertically on the screen.
(2) Still record each clip at the camera's full native resolution.
(3) When merging the clips for export, use AVMutableVideoComposition to crop the video, i.e. cut a square out of the center of each frame and write the result to a file (see the transform sketch right after this list).
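To make step (3) concrete, here is a minimal sketch of the crop transform that mergeVideos() builds later in the sample. The helper name squareCropTransform is my own, and the numbers in the comments assume the camera's default landscape buffer on an iPhone 6 (naturalSize 1920×1080), which the transform turns into a 1080×1080 portrait square:

func squareCropTransform(naturalSize: CGSize) -> CGAffineTransform {
    //The render target is a square whose side equals the buffer height (1080).
    //CGAffineTransformRotate(t1, angle) yields "rotate first, then apply t1",
    //so the combined transform:
    //  1. rotates the landscape frame 90° into portrait (now outside the render rect),
    //  2. translates it by (height, -(width - height) / 2) = (1080, -420),
    //     which pulls the frame back into the render rect and offsets it so that
    //     only the center 1080×1080 region of the picture stays visible.
    let t1 = CGAffineTransformMakeTranslation(
        naturalSize.height, -(naturalSize.width - naturalSize.height) / 2)
    return CGAffineTransformRotate(t1, CGFloat(M_PI_2))
}

Because (1920 - 1080) / 2 = 420, the exported square keeps exactly the middle strip of each frame, which is the same region the vertically centered preview layer shows on screen.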

2. Screenshots of the effect:

(screenshots omitted)

3. Sample code

import UIKit
import AVFoundation
import Photos
import AVKit
 
class ViewController: UIViewController , AVCaptureFileOutputRecordingDelegate {
     
    //The video capture session: the bridge between inputs and outputs, coordinating the data flow from input to output
    let captureSession = AVCaptureSession()
    //Video input device
    let videoDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    //Audio input device
    let audioDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
    //Writes the captured video to a file
    let fileOutput = AVCaptureMovieFileOutput()
     
    //Record and save buttons
    var recordButton, saveButton : UIButton!
     
    //All recorded clips
    var videoAssets = [AVAsset]()
    //File paths (URLs) of all recorded clips
    var assetURLs = [String]()
    //Index used to number each individual clip
    var appendix: Int32 = 1
     
    //Maximum allowed recording time (seconds)
    let totalSeconds: Float64 = 15.00
    //Frames per second
    var framesPerSecond:Int32 = 30
    //Remaining recording time
    var remainingTime : NSTimeInterval = 15.0
     
    //Whether recording has been stopped
    var stopRecording: Bool = false
    //Remaining-time timer
    var timer: NSTimer?
    //Progress bar timer
    var progressBarTimer: NSTimer?
    //Progress bar timer interval
    var incInterval: NSTimeInterval = 0.05
    //Progress bar
    var progressBar: UIView = UIView()
    //Current end position of the progress bar
    var oldX: CGFloat = 0
     
    override func viewDidLoad() {
        super.viewDidLoad()
         
        //Set the background color to black
        self.view.backgroundColor = UIColor.blackColor()
         
        //Add the video and audio input devices
        let videoInput = try! AVCaptureDeviceInput(device: self.videoDevice)
        self.captureSession.addInput(videoInput)
        let audioInput = try! AVCaptureDeviceInput(device: self.audioDevice)
        self.captureSession.addInput(audioInput)
         
        //Add the movie file output and limit the maximum recording duration
        let maxDuration = CMTimeMakeWithSeconds(totalSeconds, framesPerSecond)
        self.fileOutput.maxRecordedDuration = maxDuration
        self.captureSession.addOutput(self.fileOutput)
         
        //AVCaptureVideoPreviewLayer shows the camera's live picture in the view controller
        let videoLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        //The preview window is square and vertically centered on the screen
        //(so it shows the center region of what the camera captures)
        videoLayer.frame = CGRectMake(0, self.view.bounds.height/4,
                                      self.view.bounds.width, self.view.bounds.width)
        videoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.view.layer.addSublayer(videoLayer)
         
        //Create the buttons
        self.setupButton()
        //Start the capture session
        self.captureSession.startRunning()
         
        //Add the progress bar
        progressBar.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width,
                                   height: self.view.bounds.height * 0.1)
        progressBar.backgroundColor = UIColor(red: 4/255, green: 3/255, blue: 3/255,
                                              alpha: 0.5)
        self.view.addSubview(progressBar)
    }
     
    //Create the buttons
    func setupButton(){
        //Create the record button
        self.recordButton = UIButton(frame: CGRectMake(0,0,120,50))
        self.recordButton.backgroundColor = UIColor.redColor()
        self.recordButton.layer.masksToBounds = true
        self.recordButton.setTitle("Hold to Record", forState: .Normal)
        self.recordButton.layer.cornerRadius = 20.0
        self.recordButton.layer.position = CGPoint(x: self.view.bounds.width/2,
                                                   y:self.view.bounds.height-50)
        self.recordButton.addTarget(self, action: #selector(onTouchDownRecordButton(_:)),
                                    forControlEvents: .TouchDown)
        self.recordButton.addTarget(self, action: #selector(onTouchUpRecordButton(_:)),
                                    forControlEvents: .TouchUpInside)
         
        //Create the save button
        self.saveButton = UIButton(frame: CGRectMake(0,0,70,50))
        self.saveButton.backgroundColor = UIColor.grayColor()
        self.saveButton.layer.masksToBounds = true
        self.saveButton.setTitle("Save", forState: .Normal)
        self.saveButton.layer.cornerRadius = 20.0
         
        self.saveButton.layer.position = CGPoint(x: self.view.bounds.width - 60,
                                                 y:self.view.bounds.height-50)
        self.saveButton.addTarget(self, action: #selector(onClickStopButton(_:)),
                                  forControlEvents: .TouchUpInside)
         
        //Add the buttons to the view
        self.view.addSubview(self.recordButton)
        self.view.addSubview(self.saveButton)
    }
     
    //Record button pressed down: start recording a clip
    func onTouchDownRecordButton(sender: UIButton){
        if(!stopRecording) {
            let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,
                                                            .UserDomainMask, true)
            let documentsDirectory = paths[0] as String
            let outputFilePath = "\(documentsDirectory)/output-\(appendix).mov"
            appendix += 1
            let outputURL = NSURL(fileURLWithPath: outputFilePath)
            let fileManager = NSFileManager.defaultManager()
            //Remove any leftover file with the same name before recording
            if(fileManager.fileExistsAtPath(outputFilePath)) {
                do {
                    try fileManager.removeItemAtPath(outputFilePath)
                } catch _ {
                }
            }
            print("Start recording: \(outputFilePath) ")
            fileOutput.startRecordingToOutputFileURL(outputURL, recordingDelegate: self)
        }
    }
     
    //Record button released: stop recording the clip
    func onTouchUpRecordButton(sender: UIButton){
        if(!stopRecording) {
            timer?.invalidate()
            progressBarTimer?.invalidate()
            fileOutput.stopRecording()
        }
    }
     
    //Delegate method: recording started
    func captureOutput(captureOutput: AVCaptureFileOutput!,
                       didStartRecordingToOutputFileAtURL fileURL: NSURL!,
                        fromConnections connections: [AnyObject]!) {
        startProgressBarTimer()
        startTimer()
    }
     
    //Delegate method: recording finished
    func captureOutput(captureOutput: AVCaptureFileOutput!,
                       didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!,
                        fromConnections connections: [AnyObject]!, error: NSError!) {
        let asset : AVURLAsset = AVURLAsset(URL: outputFileURL, options: nil)
        var duration : NSTimeInterval = 0.0
        duration = CMTimeGetSeconds(asset.duration)
        print("Recorded clip: \(asset)")
        videoAssets.append(asset)
        assetURLs.append(outputFileURL.path!)
        remainingTime = remainingTime - duration
         
        //Maximum recording time reached: merge the clips automatically
        if remainingTime <= 0 {
            mergeVideos()
        }
    }
     
    //Start the remaining-time timer
    func startTimer() {
        timer = NSTimer(timeInterval: remainingTime, target: self,
                        selector: #selector(ViewController.timeout), userInfo: nil,
                        repeats:true)
        NSRunLoop.currentRunLoop().addTimer(timer!, forMode: NSDefaultRunLoopMode)
    }
     
    //The maximum recording time has been reached
    func timeout() {
        stopRecording = true
        print("Time is up.")
        fileOutput.stopRecording()
        timer?.invalidate()
        progressBarTimer?.invalidate()
    }
     
    //Start the progress bar timer
    func startProgressBarTimer() {
        progressBarTimer = NSTimer(timeInterval: incInterval, target: self,
                                   selector: #selector(ViewController.progress),
                                   userInfo: nil, repeats: true)
        NSRunLoop.currentRunLoop().addTimer(progressBarTimer!,
                                            forMode: NSDefaultRunLoopMode)
    }
     
    //Advance the progress bar
    func progress() {
        let progressProportion: CGFloat = CGFloat(incInterval / totalSeconds)
        let progressInc: UIView = UIView()
        progressInc.backgroundColor = UIColor(red: 55/255, green: 186/255, blue: 89/255,
                                              alpha: 1)
        let newWidth = progressBar.frame.width * progressProportion
        progressInc.frame = CGRect(x: oldX , y: 0, width: newWidth,
                                   height: progressBar.frame.height)
        oldX = oldX + newWidth
        progressBar.addSubview(progressInc)
    }
     
    //Save button tapped
    func onClickStopButton(sender: UIButton){
        mergeVideos()
    }
     
    //Merge the recorded clips
    func mergeVideos() {
        let duration = totalSeconds
         
        let composition = AVMutableComposition()
        //Composition tracks for the merged video and audio
        let firstTrack = composition.addMutableTrackWithMediaType(
            AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
        let audioTrack = composition.addMutableTrackWithMediaType(
            AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
         
        //Append each clip's video and audio tracks to the composition in order
        var insertTime: CMTime = kCMTimeZero
        for asset in videoAssets {
            print("Merging clip: \(asset)")
            do {
                try firstTrack.insertTimeRange(
                    CMTimeRangeMake(kCMTimeZero, asset.duration),
                    ofTrack: asset.tracksWithMediaType(AVMediaTypeVideo)[0] ,
                    atTime: insertTime)
            } catch _ {
            }
            do {
                try audioTrack.insertTimeRange(
                    CMTimeRangeMake(kCMTimeZero, asset.duration),
                    ofTrack: asset.tracksWithMediaType(AVMediaTypeAudio)[0] ,
                    atTime: insertTime)
            } catch _ {
            }
             
            insertTime = CMTimeAdd(insertTime, asset.duration)
        }
        //Rotate the video so it is not lying on its side (off by 90°)
        firstTrack.preferredTransform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
         
        //Define the size of the final output video (a square)
        print("Original video size:", firstTrack.naturalSize)
        let renderSize = CGSizeMake(firstTrack.naturalSize.height, firstTrack.naturalSize.height)
        print("Final render size:", renderSize)
         
        //Use AVMutableVideoComposition to crop the video (keep the square center region)
        let videoComposition = AVMutableVideoComposition()
        videoComposition.frameDuration = CMTimeMake(1, framesPerSecond)
        videoComposition.renderSize = renderSize
         
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(
            kCMTimeZero,CMTimeMakeWithSeconds(Float64(duration), framesPerSecond))
         
        //Rotate the frame 90° and shift it so the center square lands in the render rect
        let transformer: AVMutableVideoCompositionLayerInstruction =
            AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
        let t1 = CGAffineTransformMakeTranslation(firstTrack.naturalSize.height,
                    -(firstTrack.naturalSize.width-firstTrack.naturalSize.height)/2)
        let t2 = CGAffineTransformRotate(t1, CGFloat(M_PI_2))
        let finalTransform: CGAffineTransform = t2
        transformer.setTransform(finalTransform, atTime: kCMTimeZero)
         
        instruction.layerInstructions = [transformer]
        videoComposition.instructions = [instruction]
         
        //Build the output path for the merged video
        let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,
                                                                .UserDomainMask,true)[0]
        let destinationPath = documentsPath + "/mergeVideo-\(arc4random()%1000).mov"
        print("Merged video: \(destinationPath)")
        let videoPath: NSURL = NSURL(fileURLWithPath: destinationPath as String)
        let exporter = AVAssetExportSession(asset: composition,
                                            presetName:AVAssetExportPresetHighestQuality)!
        exporter.outputURL = videoPath
        exporter.outputFileType = AVFileTypeQuickTimeMovie
        exporter.videoComposition = videoComposition //apply the cropping videoComposition
        exporter.shouldOptimizeForNetworkUse = true
        exporter.timeRange = CMTimeRangeMake(
            kCMTimeZero,CMTimeMakeWithSeconds(Float64(duration), framesPerSecond))
        exporter.exportAsynchronouslyWithCompletionHandler({
            //Save the merged video to the photo library
            self.exportDidFinish(exporter)
        })
    }
     
    //Save the merged video to the photo library
    func exportDidFinish(session: AVAssetExportSession) {
        print("Videos merged successfully!")
        let outputURL: NSURL = session.outputURL!
        //Write the finished recording into the photo library
        PHPhotoLibrary.sharedPhotoLibrary().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(outputURL)
            }, completionHandler: { (isSuccess: Bool, error: NSError?) in
                dispatch_async(dispatch_get_main_queue(),{
                    //Reset all recording state
                    self.reset()
                     
                    //Show an alert
                    let alertController = UIAlertController(title: "Video saved",
                        message: "Do you want to review the recording?",
                        preferredStyle: .Alert)
                    let okAction = UIAlertAction(title: "OK", style: .Default, handler: {
                        action in
                        //Play back the recording
                        self.reviewRecord(outputURL)
                    })
                    let cancelAction = UIAlertAction(title: "Cancel", style: .Cancel,
                        handler: nil)
                    alertController.addAction(okAction)
                    alertController.addAction(cancelAction)
                    self.presentViewController(alertController, animated: true,
                        completion: nil)
                })
        })
    }
     
    //After a successful save, reset everything so a new video can be recorded
    func reset() {
        //Delete the individual clip files
        for assetURL in assetURLs {
            if(NSFileManager.defaultManager().fileExistsAtPath(assetURL)) {
                do {
                    try NSFileManager.defaultManager().removeItemAtPath(assetURL)
                } catch _ {
                }
                print("Deleted clip: \(assetURL)")
            }
        }
         
        //Clear the progress bar
        let subviews = progressBar.subviews
        for subview in subviews {
            subview.removeFromSuperview()
        }
         
        //Reset all state variables
        videoAssets.removeAll(keepCapacity: false)
        assetURLs.removeAll(keepCapacity: false)
        appendix = 1
        oldX = 0
        stopRecording = false
        remainingTime = totalSeconds
    }
     
    //Play back the recorded video
    func reviewRecord(outputURL: NSURL) {
        //Create a player from the local file URL
        let player = AVPlayer(URL: outputURL)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = player
        self.presentViewController(playerViewController, animated: true) {
            playerViewController.player!.play()
        }
    }
}
