1. First, turn the interface into an image. Add the following category to UIView:

#import "UIView+Screen.h"

@implementation UIView (Screen)

// Capture the view hierarchy and return it as a UIImage
- (UIImage *)convertViewToImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
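For completeness, the matching header only needs to declare the one method above (a minimal sketch):

// UIView+Screen.h
#import <UIKit/UIKit.h>

@interface UIView (Screen)

// capture the view hierarchy and return it as a UIImage
- (UIImage *)convertViewToImage;

@end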

2. Once you have the screenshot, blur it

Add a category method to UIImage.

Method 1: blur with the vImage API

The vImage API has been available since iOS 5.0. It is part of Accelerate.framework, so to use it you must add that framework to your project (and #import <Accelerate/Accelerate.h>). The blur itself is done with the vImageBoxConvolve_ARGB8888 function.

#import <Accelerate/Accelerate.h>

- (UIImage *)blurredImageWithRadius:(CGFloat)radius iterations:(NSUInteger)iterations tintColor:(UIColor *)tintColor
{
    // image must be nonzero size
    if (floorf(self.size.width) * floorf(self.size.height) <= 0.0f) return self;

    // boxSize must be an odd integer
    uint32_t boxSize = (uint32_t)(radius * self.scale);
    if (boxSize % 2 == 0) boxSize++;

    // create image buffers
    CGImageRef imageRef = self.CGImage;
    vImage_Buffer buffer1, buffer2;
    buffer1.width = buffer2.width = CGImageGetWidth(imageRef);
    buffer1.height = buffer2.height = CGImageGetHeight(imageRef);
    buffer1.rowBytes = buffer2.rowBytes = CGImageGetBytesPerRow(imageRef);
    size_t bytes = buffer1.rowBytes * buffer1.height;
    buffer1.data = malloc(bytes);
    buffer2.data = malloc(bytes);

    // create temp buffer (kvImageGetTempBufferSize makes the call return the required size)
    void *tempBuffer = malloc((size_t)vImageBoxConvolve_ARGB8888(&buffer1, &buffer2, NULL, 0, 0, boxSize, boxSize,
                                                                 NULL, kvImageEdgeExtend + kvImageGetTempBufferSize));

    // copy image data
    CFDataRef dataSource = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
    memcpy(buffer1.data, CFDataGetBytePtr(dataSource), bytes);
    CFRelease(dataSource);

    for (NSUInteger i = 0; i < iterations; i++)
    {
        // perform blur
        vImageBoxConvolve_ARGB8888(&buffer1, &buffer2, tempBuffer, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);

        // swap buffers
        void *temp = buffer1.data;
        buffer1.data = buffer2.data;
        buffer2.data = temp;
    }

    // free buffers (after the swaps, buffer1.data holds the blurred result)
    free(buffer2.data);
    free(tempBuffer);

    // create image context from buffer
    CGContextRef ctx = CGBitmapContextCreate(buffer1.data, buffer1.width, buffer1.height,
                                             8, buffer1.rowBytes, CGImageGetColorSpace(imageRef),
                                             CGImageGetBitmapInfo(imageRef));

    // apply tint
    if (tintColor && CGColorGetAlpha(tintColor.CGColor) > 0.0f)
    {
        CGContextSetFillColorWithColor(ctx, [tintColor colorWithAlphaComponent:0.25].CGColor);
        CGContextSetBlendMode(ctx, kCGBlendModePlusLighter);
        CGContextFillRect(ctx, CGRectMake(0, 0, buffer1.width, buffer1.height));
    }

    // create image from context
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    CGContextRelease(ctx);
    free(buffer1.data);
    return image;
}

At this point you have the blurred background image; the rest is straightforward.
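A minimal sketch of putting the two categories together (assuming this runs in a view controller and both headers are imported; the UIImageView named blurView is hypothetical):

// snapshot the current view and blur the result
UIImage *snapshot = [self.view convertViewToImage];
UIImage *blurred = [snapshot blurredImageWithRadius:8.0f iterations:3 tintColor:nil];

// hypothetical image view used as the frosted-glass background
UIImageView *blurView = [[UIImageView alloc] initWithFrame:self.view.bounds];
blurView.image = blurred;
[self.view addSubview:blurView];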

 

Method 2 uses the filter effects Apple provides in Core Image, but this approach is inefficient and the conversion takes a relatively long time.

// CPU rendering: slow and inefficient; to avoid blocking the main thread, run this on a background queue
- (UIImage *)blur
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *imageToBlur = [[CIImage alloc] initWithImage:_imgview.image];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey, imageToBlur, nil];
    _outputCIImage = [filter outputImage];
    CGImageRef cgImage = [context createCGImage:_outputCIImage fromRect:_outputCIImage.extent];
    UIImage *img = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);   // createCGImage:fromRect: returns a +1 reference that must be released
    return img;
}
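Since this path is CPU-bound, one way to keep the main thread responsive, as the comment above suggests, is to run it on a background queue and hop back to the main queue for the UI update (a sketch, assuming the blur method and _imgview shown above):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *blurred = [self blur];          // heavy Core Image work off the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        _imgview.image = blurred;            // UI updates must happen on the main thread
    });
});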

Method 3 uses a feature introduced in iOS 8. It is very convenient and even supports live blur; the drawback is that it only works on iOS 8 and later.

// iOS 8 built-in frosted-glass effect
- (IBAction)iOS8blurAction:(id)sender {
    UIBlurEffect *beffect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleExtraLight];
    UIVisualEffectView *view = [[UIVisualEffectView alloc] initWithEffect:beffect];
    view.frame = self.bounds;
    [self addSubview:view];
}
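Because UIVisualEffectView only exists on iOS 8 and later, a runtime check lets you fall back to one of the image-based approaches on older systems (a minimal sketch):

if (NSClassFromString(@"UIVisualEffectView")) {
    // iOS 8 and later: use the system blur
    UIBlurEffect *effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleExtraLight];
    UIVisualEffectView *effectView = [[UIVisualEffectView alloc] initWithEffect:effect];
    effectView.frame = self.bounds;
    [self addSubview:effectView];
} else {
    // earlier systems: fall back to the snapshot + vImage blur shown above
}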

Copyright notice: this is an original article by jgCho, released under the CC 4.0 BY-SA license. Please include the original link and this notice when reposting.
Original link: https://www.cnblogs.com/jgCho/p/4939837.html