1.8. Drawing Images

As mentioned previously, OpenGL has a great deal of support for drawing images in addition to its support for drawing 3D geometry. In OpenGL parlance, images are called PIXEL RECTANGLES. The values that define a pixel rectangle start out in application-controlled memory as shown in Figure 1.1 (11). Color or grayscale pixel rectangles are rendered into the frame buffer with glDrawPixels, and bitmaps are rendered into the frame buffer with glBitmap. Images that are destined for texture memory are specified with glTexImage or glTexSubImage. Up to a point, the same basic processing is applied to the image data supplied with each of these commands.

1.8.1. Pixel Unpacking

OpenGL reads image data provided by the application in a variety of formats. Parameters that define how the image data is stored in memory (length of each pixel row, number of rows to skip before the first one, number of pixels to skip before the first one in each row, etc.) can be specified with glPixelStore. So that operations on pixel data can be defined more precisely, pixels read from application memory are converted into a coherent stream of pixels by an operation referred to as PIXEL UNPACKING (12). When a pixel rectangle is transferred to OpenGL by a call like glDrawPixels, this operation applies the current set of pixel unpacking parameters to determine how the image data should be read and interpreted. As each pixel is read from memory, it is converted to a PIXEL GROUP that contains either a color, a depth, or a stencil value. If the pixel group consists of a color, the image data is destined for the color buffer in the frame buffer. If the pixel group consists of a depth value, the image data is destined for the depth buffer. If the pixel group consists of a stencil value, the image data is destined for the stencil buffer. Color values are made up of a red, a green, a blue, and an alpha component (i.e., RGBA) and are constructed from the input image data according to a set of rules defined by OpenGL. The result is a stream of RGBA values that are sent to OpenGL for further processing.
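
As a concrete illustration, the following C sketch (with hypothetical image dimensions and buffer layout) sets a few unpacking parameters with glPixelStorei and then hands the pixel rectangle to glDrawPixels:

#include <GL/gl.h>

/* Hypothetical layout: a 256x256 RGBA image embedded in a larger buffer
 * that is 512 pixels wide, starting 4 rows down and 8 pixels in. */
void draw_sub_image(const GLubyte *buffer)
{
    /* Pixel unpacking state: how OpenGL walks application memory. */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 512);  /* pixels per row in the buffer  */
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 4);     /* rows skipped before the first */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 8);   /* pixels skipped in each row    */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     /* rows are tightly packed       */

    /* Each pixel read becomes an RGBA pixel group that continues on
     * through pixel transfer, rasterization, and fragment processing. */
    glDrawPixels(256, 256, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
}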

1.8.2. Pixel Transfer

After a coherent stream of image pixels is created, pixel rectangles undergo a series of operations called PIXEL TRANSFER (13). These operations are applied whenever pixel rectangles are transferred from the application to OpenGL (glDrawPixels, glTexImage, glTexSubImage), from OpenGL back to the application (glReadPixels), or when they are copied within OpenGL (glCopyPixels, glCopyTexImage, glCopyTexSubImage).

The behavior of the pixel transfer stage is modified with glPixelTransfer. This command sets state that controls whether red, green, blue, alpha, and depth values are scaled and biased. It can also set state that determines whether incoming color or stencil values are mapped to different color or stencil values through the use of a lookup table. The lookup tables used for these operations are specified with the glPixelMap command.
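
A minimal sketch of this state, assuming an arbitrary brightness adjustment and a caller-supplied 256-entry lookup table, might look like this:

#include <GL/gl.h>

void set_transfer_state(const GLfloat red_map[256])
{
    /* Scale and bias applied to each color component during pixel transfer. */
    glPixelTransferf(GL_RED_SCALE,   1.2f);
    glPixelTransferf(GL_GREEN_SCALE, 1.2f);
    glPixelTransferf(GL_BLUE_SCALE,  1.2f);
    glPixelTransferf(GL_RED_BIAS,    0.05f);

    /* Remap red values through a caller-supplied lookup table;
     * color mapping must be enabled for the table to take effect. */
    glPixelMapfv(GL_PIXEL_MAP_R_TO_R, 256, red_map);
    glPixelTransferi(GL_MAP_COLOR, GL_TRUE);
}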

Some additional operations that occur at this stage are part of the OpenGL IMAGING SUBSET, which is an optional part of OpenGL. Hardware vendors that find it important to support advanced imaging capabilities will support the imaging subset in their OpenGL implementations, and other vendors will not support it. To determine whether the imaging subset is supported, applications need to call glGetString with the symbolic constant GL_EXTENSIONS. This returns a list of extensions supported by the implementation; the application should check for the presence of the string "ARB_imaging" within the returned extension string.

The pixel transfer operations that are defined to be part of the imaging subset are convolution, color matrix, histogram, min/max, and additional color lookup tables. Together, they provide powerful image processing and color correction operations on image data as it is being transferred to, from, or within OpenGL.
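
As one illustrative example of what the subset enables, the sketch below installs a hypothetical 3x3 box-blur convolution kernel; it is only valid on implementations that export the ARB_imaging entry points (typically declared in glext.h or resolved through an extension loader):

#include <GL/gl.h>
#include <GL/glext.h>  /* assumed source of the imaging-subset declarations */

void enable_box_blur(void)
{
    /* 3x3 kernel of RGB weights, each 1/9, i.e. a simple box blur. */
    static const GLfloat kernel[3 * 3 * 3] = {
        1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,
        1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,
        1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,  1/9.0f, 1/9.0f, 1/9.0f,
    };

    /* Filter applied to pixels as they pass through the pixel transfer stage. */
    glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_RGB, 3, 3,
                          GL_RGB, GL_FLOAT, kernel);
    glEnable(GL_CONVOLUTION_2D);
}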

1.8.3. Rasterization and Back-End Processing

Following the pixel transfer stage, fragments are generated through rasterization of pixel rectangles in much the same way as they are generated from 3D geometry (14). This process, along with the current OpenGL state, determines where the image will be drawn in the frame buffer. Rasterization takes into account the current RASTER POSITION, which can be set with glRasterPos or glWindowPos, and the current zoom factor, which can be set with glPixelZoom and which causes an image to be magnified or reduced in size as it is drawn.
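
For example, the following sketch (assuming OpenGL 1.4 or later for glWindowPos, and caller-supplied pixel data) positions an image and draws it at twice its original size:

#include <GL/gl.h>

void draw_zoomed(const GLubyte *pixels, GLsizei w, GLsizei h)
{
    glWindowPos2i(50, 50);     /* current raster position, in window coordinates */
    glPixelZoom(2.0f, 2.0f);   /* magnify by a factor of two in x and y          */
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glPixelZoom(1.0f, 1.0f);   /* restore the default zoom                       */
}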

After fragments have been generated from pixel rectangles, they undergo the same set of fragment processing operations as geometric primitives (6) and then go on to the remainder of the OpenGL pipeline in exactly the same manner as geometric primitives, all the way until pixels are deposited in the frame buffer (8, 9, 10).

Pixel values provided through a call to glTexImage or glTexSubImage do not go through rasterization or the subsequent fragment processing but directly update the appropriate portion of texture memory (15).
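
A minimal sketch of that path, with an assumed 256x256 RGBA image and a texture object already bound, might look like this:

#include <GL/gl.h>

void load_texture(const GLubyte *image, const GLubyte *region)
{
    /* Define the full 256x256 image for the currently bound 2D texture;
     * the data goes through unpacking and pixel transfer, then straight
     * into texture memory without being rasterized. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, image);

    /* Later, replace just a 64x64 area starting at texel (16, 16). */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 16, 16, 64, 64,
                    GL_RGBA, GL_UNSIGNED_BYTE, region);
}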

1.8.4. Read Control

Pixel rectangles are read from the frame buffer and returned to application memory with glReadPixels. They can also be read from the frame buffer and written to another portion of the frame buffer with glCopyPixels, or they can be read from the frame buffer and written into texture memory with glCopyTexImage or glCopyTexSubImage. In all of these cases, the portion of the frame buffer that is to be read is controlled by the READ CONTROL stage of OpenGL and set with the glReadBuffer command (16).
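
For instance, the following sketch selects the back buffer as the read source and copies a region of it into the currently bound texture:

#include <GL/gl.h>

void copy_back_buffer_to_texture(void)
{
    glReadBuffer(GL_BACK);   /* read control: source for subsequent reads and copies */

    /* Copy a 256x256 region of the frame buffer, starting at (0, 0),
     * into level 0 of the currently bound 2D texture. */
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);
}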

The values read from the frame buffer are sent through the pixel transfer stage (13) in which various image processing operations can be performed. For copy operations, the resulting pixels are sent to texture memory or back into the frame buffer, depending on the command that initiated the transfer. For read operations, the pixels are formatted for storage in application memory under the control of the PIXEL PACKING stage (17). This stage is the mirror of the pixel unpacking stage (12), in that parameters that define how the image data is to be stored in memory (length of each pixel row, number of rows to skip before the first one, number of pixels to skip before the first one in each row, etc.) can be specified with glPixelStore. Thus, application developers enjoy a lot of flexibility in determining how the image data is returned from OpenGL into application memory.
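
A short sketch of a read-back, with illustrative sizes and pack parameters that mirror the unpack example earlier, might look like this:

#include <GL/gl.h>

void read_region(GLubyte *dest)
{
    /* Pixel packing state: how the returned pixels are laid out in memory. */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);    /* no row padding                 */
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);   /* 0 means "use the image width"  */

    /* Read a 100x100 region starting at window coordinates (10, 10). */
    glReadPixels(10, 10, 100, 100, GL_RGBA, GL_UNSIGNED_BYTE, dest);
}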
