


Trade name:

LORD-Pak 50 mL Plunger (10:1)

Size:

Mix Tip: 0.21" x 4.5" x 24 elements


FAILURE OR IMPROPER SELECTION OR IMPROPER USE OF THE PRODUCTS DESCRIBED HEREIN OR RELATED ITEMS CAN CAUSE DEATH, PERSONAL INJURY AND PROPERTY DAMAGE. This document and other information from Parker-Hannifin Corporation, its subsidiaries and authorized distributors provide product or system options for further investigation by users having technical expertise. The user, through its own analysis and testing, is solely responsible for making the final selection of the system and components and assuring that all performance, endurance, maintenance, safety and warning requirements of the application are met. The user must analyze all aspects of the application, follow applicable industry standards, and follow the information concerning the product in the current product catalog and in any other materials provided from Parker or its subsidiaries or authorized distributors. To the extent that Parker or its subsidiaries or authorized distributors provide component or system options based upon data or specifications provided by the user, the user is responsible for determining that such data and specifications are suitable and sufficient for all applications and reasonably foreseeable uses of the components or systems.









LORD-Pak 50 mL plunger (10:1) is for use with the LORD-Pak 50 mL manual applicator to dispense 50 mL cartridges at a 10:1 mix ratio by volume. Accommodates mix tip size: 0.21" x 4.5" x 24 elements.


Assembly & Protection Solutions Division
Figure: The pipeline of the light field reconstruction using the proposed LightGAN model. We visualize the architecture of the high-dimensional generator and discriminator.

Abstract:
A light field image captured by a plenoptic camera can be considered a sampling of light distribution within a given space. However, with the limited pixel count of the sensor, the acquisition of a high-resolution sample often comes at the expense of losing parallax information. In this work, we present a learning-based generative framework to overcome such tradeoff by directly simulating the light field distribution. An important module of our model is the high-dimensional residual block, which fully exploits the spatio-angular information. By directly learning the distribution, our approach can generate both high-quality sub-aperture images and densely-sampled light fields. Experimental results on both real-world and synthetic datasets demonstrate that the proposed method outperforms other state-of-the-art approaches and achieves visually more realistic results.
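The 4D spatio-angular structure described above can be illustrated with a short NumPy sketch (the array dimensions are hypothetical, chosen only for demonstration): a sub-aperture image fixes the two angular coordinates, while an epipolar-plane image mixes one angular and one spatial axis.

```python
import numpy as np

# Hypothetical 4D light field L(u, v, x, y): a 5x5 grid of angular
# views, each a 32x48-pixel image.
lf = np.random.rand(5, 5, 32, 48)

# A sub-aperture image (SAI) fixes the angular coordinates (u, v).
center_sai = lf[2, 2]          # shape (32, 48)

# An epipolar-plane image (EPI) fixes one angular and one spatial
# coordinate, exposing the angular correlation the paper exploits.
epi = lf[2, :, 16, :]          # shape (5, 48): angular v vs. spatial y

assert center_sai.shape == (32, 48)
assert epi.shape == (5, 48)
```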
Published in: IEEE Access (Volume 8)
TABLE 1: Quantitative performance of the proposed network trained using different loss terms on the HCI New test set for the spatial 4× task.
TABLE 2: Quantitative evaluation (PSNR) on synthetic and real-world light fields for γ_s = 4. All numbers are measured in dB. Not all of the reflective and occlusion scenes are used; in this study, we randomly select 20 scenes from each category for evaluation and report the average PSNR values.
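Since the tables report PSNR in dB, a minimal NumPy sketch of the metric (the peak value of 1.0 assumes images normalized to [0, 1]):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """PSNR in dB between two images with pixel values in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
est = np.full((4, 4), 0.1)           # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, est), 1))      # → 20.0
```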
Z. Xu, J. Ke and E. Y. Lam, "High-resolution lightfield photography using two masks", Opt. Express , vol. 20, pp. 10971-10983, May 2012.
N. Chen, C. Zuo, E. Lam and B. Lee, "3D imaging based on depth measurement technologies", Sensors , vol. 18, no. 11, pp. 3711, Oct. 2018.
K. Mitra and A. Veeraraghavan, "Light field denoising light field superresolution and stereo camera based refocussing using a GMM light field patch prior", Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops , pp. 22-28, Jun. 2012.
T.-C. Wang, A. A. Efros and R. Ramamoorthi, "Depth estimation with occlusion modeling using light-field cameras", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 38, no. 11, pp. 2170-2181, Nov. 2016.
X. Sun, Z. Xu, N. Meng, E. Y. Lam and H. K.-H. So, "Data-driven light field depth estimation using deep convolutional neural networks", Proc. Int. Joint Conf. Neural Netw. (IJCNN) , pp. 367-374, Jul. 2016.
P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi and R. Ng, "Learning to synthesize a 4D RGBD light field from a single image", Proc. IEEE Int. Conf. Comput. Vis. (ICCV) , pp. 2243-2251, Oct. 2017.
N. Meng, T. Zeng and E. Y. Lam, "Spatial and angular reconstruction of light field based on deep generative networks", Proc. IEEE Int. Conf. Image Process. (ICIP) , pp. 4659-4663, Sep. 2019.
C. Zhang, G. Hou, Z. Zhang, Z. Sun and T. Tan, "Efficient auto-refocusing for light field camera", Pattern Recognit. , vol. 81, pp. 176-189, Sep. 2018.
E. Y. Lam, "Computational photography with plenoptic camera and light field capture: Tutorial", J. Opt. Soc. Amer. A Opt. Image Sci. , vol. 32, no. 11, pp. 2021-2032, Nov. 2015.
T. E. Bishop, S. Zanetti and P. Favaro, "Light field superresolution", Proc. IEEE Int. Conf. Comput. Photography (ICCP) , pp. 1-9, Apr. 2009.
J. Lim, H. Ok, B. Park, J. Kang and S. Lee, "Improving the spatial resolution based on 4D light field data", Proc. 16th IEEE Int. Conf. Image Process. (ICIP) , pp. 1173-1176, Nov. 2009.
P. Didyk, P. Sitthi-Amorn, W. Freeman, F. Durand and W. Matusik, "Joint view expansion and filtering for automultiscopic 3d displays", ACM Trans. Graph. , vol. 32, no. 6, pp. 221, 2013.
S. Vagharshakyan, R. Bregovic and A. Gotchev, "Light field reconstruction using shearlet transform", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 40, no. 1, pp. 133-147, Jan. 2018.
N. Meng, E. Y. Lam, K. K. Tsia and H. K.-H. So, "Large-scale multi-class image-based cell classification with deep learning", IEEE J. Biomed. Health Informat. , vol. 23, no. 5, pp. 2091-2098, Sep. 2019.
Y. Yang, H. Chen and J. Shao, "Triplet enhanced AutoEncoder: Model-free discriminative network embedding", Proc. 28th Int. Joint Conf. Artif. Intell. , pp. 5363-5369, Aug. 2019.
T.-C. Wang, J.-Y. Zhu, E. Hiroaki, M. Chandraker, A. A. Efros and R. Ramamoorthi, "A 4D light-field dataset and CNN architectures for material recognition", Proc. Eur. Conf. Comput. Vis. , pp. 121-138, Oct. 2016.
N. Meng, X. Sun, H. K.-H. So and E. Y. Lam, "Computational light field generation using deep nonparametric Bayesian learning", IEEE Access , vol. 7, pp. 24990-25000, 2019.
N. Meng, H. K.-H. So, X. Sun and E. Lam, "High-dimensional dense residual convolutional neural network for light field reconstruction", IEEE Trans. Pattern Anal. Mach. Intell. , Oct. 2019.
Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee and I. S. Kweon, "Light-field image super-resolution using convolutional neural network", IEEE Signal Process. Lett. , vol. 24, no. 6, pp. 848-852, Jun. 2017.
Y. Wang, F. Liu, K. Zhang, G. Hou, Z. Sun and T. Tan, "LFNet: A novel bidirectional recurrent convolutional neural network for light-field image super-resolution", IEEE Trans. Image Process. , vol. 27, no. 9, pp. 4274-4286, Sep. 2018.
C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, et al., "Photo-realistic single image super-resolution using a generative adversarial network", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 105-114, Jul. 2017.
M. Mathieu, C. Couprie and Y. LeCun, "Deep multi-scale video prediction beyond mean square error", Proc. Int. Conf. Learn. Represent. , pp. 1-17, 2016.
S. Wanner and B. Goldluecke, "Variational light field analysis for disparity estimation and super-resolution", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 36, no. 3, pp. 606-619, Mar. 2014.
S. Pujades, F. Devernay and B. Goldluecke, "Bayesian view synthesis and image-based rendering principles", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. , pp. 3906-3913, Jun. 2014.
T. E. Bishop and P. Favaro, "The light field camera: Extended depth of field aliasing and superresolution", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 34, no. 5, pp. 972-986, May 2012.
Y. Wang, G. Hou, Z. Sun, Z. Wang and T. Tan, "A simple and robust super resolution method for light field images", Proc. IEEE Int. Conf. Image Process. (ICIP) , pp. 1459-1463, Sep. 2016.
M. Levoy and P. Hanrahan, "Light field rendering", ACM Conf. Comput. Graph. Interact. Techn. , pp. 31-42, 1996.
Z. Lin and H.-Y. Shum, "A geometric analysis of light field rendering", Int. J. Comput. Vis. , vol. 58, no. 2, pp. 121-138, Jul. 2004.
C.-K. Liang and R. Ramamoorthi, "A light transport framework for lenslet light field cameras", ACM Trans. Graph. , vol. 34, no. 2, pp. 16, Feb. 2015.
D. Cho, M. Lee, S. Kim and Y.-W. Tai, "Modeling the calibration pipeline of the lytro camera for high quality light-field image reconstruction", Proc. IEEE Int. Conf. Comput. Vis. , pp. 3280-3287, Dec. 2013.
Stanford Lytro Light Field Archive , Oct. 2018, [online] Available: http://lightfields.stanford.edu/.
M. Rerabek and T. Ebrahimi, "New light field image dataset", Proc. 8th Int. Conf. Qual. Multimedia Exper. , pp. 1-7, Jun. 2016.
G. Wu, Y. Liu, L. Fang, Q. Dai and T. Chai, "Light field reconstruction using convolutional network on EPI and extended applications", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 41, no. 7, pp. 1681-1694, Jul. 2018.
N. Meng, X. Wu, J. Liu and E. Lam, "High-order residual network for light field super-resolution", Proc. AAAI Conf. Artif. Intell. , Palo Alto, CA, USA: AAAI Press, 2020.
N. K. Kalantari, T.-C. Wang and R. Ramamoorthi, "Learning-based view synthesis for light field cameras", ACM Trans. Graph. , vol. 35, no. 6, pp. 193, 2016.
S. Zhang, Y. Lin and H. Sheng, "Residual networks for light field image super-resolution", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 11046-11055, Jun. 2019.
R. A. Farrugia, C. Galea and C. Guillemot, "Super resolution of light field images using linear subspace projection of patch-volumes", IEEE J. Sel. Topics Signal Process. , vol. 11, no. 7, pp. 1058-1071, Oct. 2017.
H. W. F. Yeung, J. Hou, J. Chen, Y. Y. Chung and X. Chen, "Fast light field reconstruction with deep coarse-to-fine modeling of spatial-angular clues", Proc. Eur. Conf. Comput. Vis. , pp. 137-152, Sep. 2018.
Y. Wang, F. Liu, Z. Wang, G. Hou, Z. Sun and T. Tan, "End-to-end view synthesis for light field imaging with pseudo 4DCNN", Proc. Eur. Conf. Comput. Vis. , pp. 333-348, Sep. 2018.
I. Goodfellow, et al., "Generative adversarial nets", Proc. Adv. Neural Inf. Process. Syst. , pp. 2672-2680, Dec. 2014.
G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, et al., "Light field image processing: An overview", IEEE J. Sel. Topics Signal Process. , vol. 11, no. 7, pp. 926-954, Oct. 2017.
G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai and Y. Liu, "Light field reconstruction using deep convolutional network on EPI", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 1638-1646, Jul. 2017.
R. A. Farrugia and C. Guillemot, "Light field super-resolution using a low-rank prior and deep convolutional neural networks", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 42, no. 5, pp. 1162-1175, May 2018.
K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 770-778, Jun. 2016.
A. L. Maas, A. Y. Hannun and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models", Proc. Int. Conf. Mach. Learn. , vol. 30, no. 1, pp. 3, 2013.
W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, et al., "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network", Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR) , pp. 1874-1883, Jun. 2016.
K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", Proc. Int. Conf. Learn. Represent. , pp. 1-4, 2015.
N. Meng, T. Zeng and E. Y. Lam, "Perceptual loss for light field reconstruction in high-dimensional convolutional neural networks", Proc. Imag. Appl. Opt. , pp. 5, 2019.
X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks", Proc. 13th Int. Conf. Artif. Intell. Statist. , pp. 249-256, 2010.
S. Gross and M. Wilber, "Training and investigating residual nets", Facebook AI Res. , vol. 6, May 2016.
M. Ziegler, R. op het Veld, J. Keinert and F. Zilly, "Acquisition system for dense lightfield of large scenes", Proc. 3DTV Conf. True Vis. , pp. 1-4, Jun. 2017.
W.-S. Lai, J.-B. Huang, N. Ahuja and M.-H. Yang, "Fast and accurate image super-resolution with deep Laplacian pyramid networks", IEEE Trans. Pattern Anal. Mach. Intell. , vol. 41, no. 11, pp. 2599-2613, Nov. 2019.
Y. Zhang, Y. Tian, Y. Kong, B. Zhong and Y. Fu, "Residual dense network for image super-resolution", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. , pp. 2472-2481, Jun. 2018.
K. Honauer, O. Johannsen, D. Kondermann and B. Goldluecke, "A dataset and evaluation methodology for depth estimation on 4D light fields", Proc. Asian Conf. Comput. Vis. , pp. 19-34, Nov. 2016.
M. S. K. Gul and B. K. Gunturk, "Spatial and angular resolution enhancement of light fields using convolutional neural networks", IEEE Trans. Image Process. , vol. 27, no. 5, pp. 2146-2159, May 2018.

In computer vision and three-dimensional imaging, light field imaging has generated a lot of interest due to the designs of various capturing systems [1] , [2] . Compared to conventional cameras, a light field camera (also known as a plenoptic camera) allows one to capture both intensity values and directions of light rays from real-world scenes. The additional information enables many applications, such as image refocusing [3] , depth estimation [4] , [5] , and novel view generation [6] , [7] . However, there is an inherent tradeoff between spatial and angular resolutions. The generally lower spatial resolution of the light field image poses great challenges in exploiting the advantages brought from additional angular sampling [8] , [9] .
Taking advantage of the parallax between neighboring views, captured light field scenes preserve high correlations among the sub-aperture images (SAIs). Prior work on the light field super-resolution problem generally regards geometric properties as reconstruction priors, warping the neighboring views to the target view [10] , [11] . The performance of these methods depends on accurate geometric information of the scene as priors. However, depth estimation approaches have difficulty providing depth maps accurate enough for pixel warping, and errors in this process give rise to artifacts such as tearing and ghosting.
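A toy NumPy sketch of disparity-based view warping (the nearest-neighbor scheme and uniform disparity are illustrative assumptions, not the specific methods cited above); it shows how pixels left unassigned by an inaccurate disparity map surface as tearing or ghosting artifacts:

```python
import numpy as np

def warp_view(src, disparity, du):
    """Warp a source SAI toward a neighboring view by shifting each pixel
    horizontally by disparity * angular offset du (nearest neighbor).
    Destination pixels that receive no source pixel stay zero, which is
    where tearing/ghosting appears when the disparity estimate is wrong."""
    h, w = src.shape
    out = np.zeros_like(src)
    xs = np.arange(w)
    for y in range(h):
        shifted = np.round(xs + disparity[y] * du).astype(int)
        valid = (shifted >= 0) & (shifted < w)
        out[y, shifted[valid]] = src[y, xs[valid]]
    return out

src = np.arange(12, dtype=float).reshape(3, 4)
disp = np.ones(3)                 # uniform disparity: 1 pixel per view step
warped = warp_view(src, disp, du=1)
print(warped[0])                  # → [0. 0. 1. 2.]  (left column is a hole)
```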
To mitigate the dependency on explicit depth or disparity information, many alternative approaches are based on sampling and consecutive reconstruction of the plenoptic function [12] , [13] . Instead of using disparity as auxiliary information, they consider each pixel of a given SAI as a sample of the light field distribution function. Recently, deep learning has proven to be a powerful technique in a wide range of applications [14] , [15] . With the availability of light field datasets [16] , methods based on convolutional neural networks (CNNs) have been successfully applied to light field super-resolution [17] , [18] . Yoon et al. [19] establish LFCNN, the first deep learning framework for both spatial and angular super-resolution, but do not exploit the correlation among adjacent views. Wang et al. [20] regard the light field as an image sequence and introduce a bidirectional recurrent convolutional neural network to approximate the correspondences of neighboring images. However, the image-sequence assumption reduces the complexity of the light field angular correlation and changes the relations among the surrounding SAIs, which consequently limits the reconstruction results.
Given the inherent geometric properties of light field data, reconstruction algorithms should involve information from both the spatial and angular dimensions. However, existing methods struggle to handle the uncertainty of recovering lost high-frequency spatial details while preserving angular correlation. In this paper, we propose a generative model to effectively address the light field super-resolution problem. Generative adversarial networks (GANs) are well known for their capacity to generate plausible-looking natural images with high perceptual quality [21] , [22] . Considering such benefits, we incorporate the high-dimensional convolution (HDC) l
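The residual blocks underlying such architectures build on the skip connection y = x + F(x) introduced by He et al.; a toy NumPy sketch (the single linear map standing in for F is a simplification of the paper's high-dimensional spatio-angular convolutions):

```python
import numpy as np

def residual_block(x, weight):
    """Toy residual block: y = x + F(x). Here F is a single linear map
    followed by a leaky ReLU; a high-dimensional residual block would
    instead apply 4D (spatio-angular) convolutions for F."""
    fx = x @ weight
    fx = np.where(fx > 0, fx, 0.2 * fx)   # leaky ReLU, slope 0.2
    return x + fx

x = np.ones((2, 3))
w = np.zeros((3, 3))        # with zero weights, F(x) = 0 ...
y = residual_block(x, w)
assert np.allclose(y, x)    # ... so the skip connection passes x through
```

Because the identity path is always present, deeper stacks of such blocks only need to learn the residual correction, which eases optimization of deep reconstruction networks.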